Data-driven multifidelity topology design with multi-channel variational auto-encoder for concurrent optimization of multiple design variable fields
Abstract
Topology optimization can generate high-performance structures with a high degree of design freedom. Nevertheless, it is generally prone to entrapment in undesirable local optima, especially in problems characterized by strong non-linearity. This study aims to establish a gradient-free topology optimization framework that facilitates a more global solution search to avoid such entrapment. The framework utilizes data-driven multifidelity topology design (MFTD), where solution candidates initially generated by solving low-fidelity (LF) optimization problems are iteratively updated by a variational auto-encoder (VAE) and high-fidelity (HF) evaluation. A key procedure of the solution update is to construct HF models by extruding the material distributions obtained by the VAE to a specified thickness, called the HF modeling parameter, which is conventionally constant across all solution candidates. This constant assignment leaves the HF modeling parameter space unexplored, necessitating extensive parametric studies outside the optimization loop. To enable a more comprehensive optimization in a single run, we propose a multi-channel image data architecture that stores material distributions in the first channel and HF modeling parameters in the second or subsequent channels. This significant shift enables a thorough exploration of the HF modeling parameter space with no need for parametric studies afterwards, by simultaneously optimizing both material distributions and HF modeling parameters. We apply the framework to a maximum stress minimization problem, where the LF optimization problem is formulated with approximation techniques, whereas the HF evaluation accurately analyzes the stress field, bypassing any approximation techniques. We first validate that the framework can successfully identify high-performance solutions superior to reference solutions by effectively exploring both material distributions and HF modeling parameters in a fundamental stiffness maximization problem. We then demonstrate that the framework can identify promising solutions for the original maximum stress minimization problem.
keywords:
Topology optimization, data-driven approach, evolutionary algorithm, mini-max optimization problem, multi-channel variational auto-encoder

1 Introduction
In structural design, there is an increasing demand for designing lightweight structures that are optimized for performances such as stiffness and strength. One of the most successful tools to meet the demand is topology optimization, the concept of which is based on a formulation that shifts the focus from optimizing structural configurations to optimizing material distribution within the design domain, as introduced by Bendsøe and Kikuchi [1]. This elegantly simple yet effective formulation renders topology optimization sufficiently versatile for addressing a wide range of design problems, such as structural design and thermal-fluid design [2, 3].
In practical use, topology optimization has been predominantly utilized for stiffness maximization, one of the most fundamental optimization problems in structural design [4]. Despite its widespread application in such contexts, topology optimization has been less frequently applied to problems characterized by strong non-linearity, such as maximum stress minimization and buckling load maximization. Notably, maximum stress minimization is crucial for designing structures in industrial applications, where failure criteria are primarily stress-based. Consequently, numerous researchers are actively tackling the challenges associated with maximum stress minimization problems [5, 6, 7].
Maximum stress minimization poses a notable challenge called the "singularity phenomenon." This challenge arises when material removal leads to rapid increases in stress, causing numerical instabilities. To mitigate this, traditional strategies have adopted stress-relaxation techniques. Moreover, because discrete maximum stress values are impractical for optimization, these methods often resort to approximation techniques, substituting the maximum stress with smoother, more continuous functions. These approximations, however, introduce strong non-linearities, and the prevalent gradient-based topology optimization tends to converge to local minima under these conditions. Consequently, traditional approaches, through their reliance on relaxation and approximation techniques, can be viewed as addressing a modified version of the original optimization problem rather than solving it directly.
To avoid entrapment in local optima and promote a more global search, gradient-free topology optimization methods, such as Evolutionary Algorithms (EAs) [8], could offer a viable approach for addressing the challenges posed by strong non-linearities, which has motivated a number of researchers to explore the application of EAs to topology optimization [9, 10, 11]. Although EAs enable a broader exploration of the solution space, the number of design variables that EAs can handle is limited to a far smaller order than the capacity of conventional gradient-based topology optimization methods [12]. This constraint arises because the two primary procedures of EAs, i.e., crossover and mutation, conventionally handle design variables as individual entities with no consideration of their spatial relationships. Handling such a large number of design variables individually leads to the curse of dimensionality, where the EA operations require an exponentially increasing computational cost as the number of design variables increases.
To address the scalability issue of design variables in topology optimization, deep generative models [13] can be employed to reduce the dimension of the design space while maintaining a high degree of design freedom. In the realm of engineering design, data-driven designs based on such deep generative models have been actively investigated recently [14, 15]. Guo et al. pioneered a data-driven topology optimization method that employs a Variational Auto-Encoder (VAE) [16] to encode and decode material distributions in the latent space, demonstrating the effectiveness of deep generative models in representing topologically optimized designs [17]. Oh et al. introduced a Generative Adversarial Network (GAN) [18] to generate diverse material distributions, which successfully identified high-performance structures [19]. Wang et al. utilized a Latent Variable Gaussian Process (LVGP) to embed mixed-variable inputs consisting of both qualitative microstructure concepts and quantitative microstructure design variables into a continuous and differentiable design space, enabling multiscale topology optimization [20].
To bypass the curse of dimensionality in EA-based topology optimization, a series of studies has focused on an approach that replaces the conventional crossover and mutation procedures with EA-like ones that inherit their essential concept but are specifically tuned for topology optimization requiring numerous design variables. Yamasaki et al. proposed a data-driven topology design based on a crossover-like operation using a VAE that can compress high-dimensional input data into a low-dimensional latent space and then reconstruct high-dimensional output from this latent space. They demonstrated that deep generative model-based crossover is effective for efficiently generating offspring material distributions that inherit topological features from the parent ones [21]. Yaji et al. introduced a mutation-like operation that deduces a mutant material distribution by solving topology optimization additionally constrained with a referential material distribution in the dataset under optimization [22]. They confirmed through several numerical examples that the mutation-like operation is effective in preventing premature convergence in the iterative process.
Moreover, Yaji et al. integrated this EA-based topology optimization method with Multi-Fidelity Topology Design (MFTD), based on the concept of the multifidelity method known for its success in indirectly solving an original problem by addressing both low-fidelity (LF) and high-fidelity (HF) problems [23]. In MFTD, the original topology optimization problem is solved through two primary steps: LF optimization and HF evaluation. The beauty of this multifidelity formulation is that a topology optimization problem that is analytically or numerically difficult to solve directly can be solved indirectly by replacing it with a simplified optimization problem, while ensuring that mechanical performance is accurately evaluated throughout the optimization process. The MFTD's effectiveness has been convincingly confirmed by a vast array of mechanical applications, such as heat exchangers and redox flow batteries [24, 25, 26].
To tackle the strong non-linearity and the inaccurate stress evaluation by approximation techniques in maximum stress minimization problems, Kato et al. applied the data-driven MFTD to a maximum stress minimization problem [27]. They confirmed the accurate stress evaluation and more global solution search of the data-driven MFTD resulted in material distributions superior to those derived from the conventional gradient-based topology optimization. Additionally, Kii et al. proposed a crossover-like operation called latent crossover [28] that enhances the convergence of the data-driven MFTD by generating offspring material distributions with latent vectors sampled using the simplex crossover [29] to inherit the parental characteristics more efficiently than the conventional random latent vectors uniformly sampled from the latent space.
Despite the significant progress in data-driven MFTD for maximum stress minimization, the conventional data-driven MFTD has a limitation in exploring the parameters used to construct the HF model. In previous data-driven MFTD approaches, the thickness of the material distribution used to construct the HF model is typically assigned a constant value across all solution candidates, with no exploration of the thickness value during the optimization process. However, this thickness, which we refer to as an HF modeling parameter, has a significant impact on structural performance metrics, such as stiffness and stress, for the entire structure. Therefore, exploring the HF parameter space in addition to the material distribution space is crucial for identifying high-performance structures. To explore the HF parameter space, the conventional data-driven MFTD requires extensive parametric studies, which often depend heavily on user intuition.
In this study, to expand the optimization search to the HF modeling space, we focus on an architecture of image data used in the data-driven MFTD. The data-driven MFTD conventionally adopts a single-channel image to store material distributions, where HF modeling parameters are exclusively assigned as a constant value across each material distribution. Instead of representing material distribution as a single-channel image, we introduce a multi-channel image where design variables for constructing the HF model are stored in the second or subsequent channels of multi-channel images, with the material distributions maintained in the first channel. This significant shift enables a thorough exploration of the HF modeling parameter space with no necessity of parametric studies afterwards, by simultaneously optimizing both material distributions and HF modeling parameters.
To transition the data architecture in the data-driven MFTD from a single-channel to multi-channel image, we primarily consider two aspects for adaptation: the crossover operation facilitated by VAE and the mutation operation achieved through solving constrained LF optimization. Unlike the original data-driven MFTD, where the VAE’s machine learning model architecture consisted simply of dense layers, this study incorporates a convolutional neural network (CNN) to efficiently learn the relationship between material distributions in the first channel and the HF parameters in subsequent channels [30]. For the mutation operation, material distributions are generated in a manner similar to the traditional data-driven MFTD and then stored in the first channel of the multi-channel images. After preparing the first channel, subsequent channels are designated to globally represent the HF parameter space.
The primary goal of this study is to develop an enhanced data-driven MFTD capable of identifying high-performance structures within the HF modeling parameter space in addition to traditional material distribution space, especially for optimization problems characterized by significant non-linearity, such as maximum stress minimization. Initially, we apply this approach to stiffness maximization for a simple reinforced skin structure, where the thickness of the reinforcement plays a crucial role in determining structural performance metrics, such as stiffness and stress. Subsequently, we demonstrate the effectiveness of the proposed framework in addressing maximum stress minimization using the same structural model for the stiffness maximization.
The rest of the paper is organized as follows. In Section 2, we introduce the concept of data-driven MFTD for the simultaneous optimization of material distributions and HF modeling parameters using multi-channel images, focusing on the significant shift of the image data architecture. In Section 3, we briefly describe the formulation of the optimization problems discussed in this study, including the LF optimization and HF evaluation. In Section 4, we elaborate on the detailed implementation of the proposed framework, including the architecture of the multi-channel VAE and the EA-based optimization process. In Section 5, we examine the effectiveness of the proposed framework through numerical examples of stiffness maximization and maximum stress minimization. Finally, we conclude the paper in Section 6.
2 Framework
To facilitate the simultaneous optimization of the material distributions and HF modeling parameters, we first introduce a concept of MFTD specifically defined for the HF modeling parameter search. A multi-objective topology optimization problem is formulated in general, as follows:
\[
\begin{aligned}
\underset{\rho}{\text{minimize}} \quad & \left[ F_1(\rho),\, F_2(\rho),\, \dots,\, F_{N_{\mathrm{obj}}}(\rho) \right] \\
\text{subject to} \quad & G_j(\rho) \le 0, \quad j = 1, 2, \dots, N_{\mathrm{cns}},
\end{aligned}
\tag{1}
\]
where $F_i$ and $G_j$ represent objective and constraint functionals, respectively. $\rho$ denotes the design variable field, which is a binary material density field determined by the spatial position $\mathbf{x}$ in the design domain $D$, a predefined two- or three-dimensional domain. In MFTD, solving this topology optimization problem directly is assumed to be difficult or even impossible in some cases, as it involves dealing with an enormous number of design variables and is formulated as a non-linear mathematical optimization problem.
To indirectly tackle the original intricate topology optimization problem, MFTD divides the original optimization problem into two components: an LF topology optimization and an HF evaluation. The LF topology optimization problem is formulated as follows:
\[
\begin{aligned}
\underset{\rho}{\text{minimize}} \quad & \widetilde{F}\!\left(\rho;\, \mathbf{s}^{(k)}\right) \\
\text{subject to} \quad & \widetilde{G}_j\!\left(\rho;\, \mathbf{s}^{(k)}\right) \le 0, \quad j = 1, 2, \dots, \widetilde{N}_{\mathrm{cns}}, \\
\text{for given} \quad & \mathbf{s}^{(k)} \in S = \left\{ \mathbf{s}^{(1)}, \dots, \mathbf{s}^{(K)} \right\},
\end{aligned}
\tag{2}
\]
where $\widetilde{F}$ and $\widetilde{G}_j$ represent the objective and constraint functionals for the LF optimization problem, respectively. The vector $\mathbf{s}^{(k)}$ consists of LF seeding parameters, which are used to generate a variety of design candidates by varying the LF topology optimization problem. The superscript $(k)$ with $k = 1, 2, \dots, K$ corresponds to different topology optimizations with respect to the $K$ sample points in $S$. In other words, the optimization problem (2) is solved a total of $K$ times, each time using a different sample point from $S$, resulting in $K$ distinct topologically optimized material distributions $\rho^{(k)}$. Once all candidates from the LF topology optimization problems are collected, the HF evaluation is performed to identify the best solutions, denoted by the index set $\mathbf{k}^*$, where:
\[
\mathbf{k}^* = \underset{k \in \{1, \dots, K\}}{\arg\min} \left[ F_1\!\left(\rho^{(k)}; \mathbf{t}^{(k)}\right), \dots, F_{N_{\mathrm{obj}}}\!\left(\rho^{(k)}; \mathbf{t}^{(k)}\right) \right] \quad \text{subject to} \quad G_j\!\left(\rho^{(k)}; \mathbf{t}^{(k)}\right) \le 0.
\tag{3}
\]
Herein, HF seeding parameters are introduced as an extension to the conventional MFTD. These parameters are represented as a list of vectors denoted by $T = \left[ \mathbf{t}^{(1)}, \dots, \mathbf{t}^{(K)} \right]$. Each vector $\mathbf{t}^{(k)}$ contains parameter distributions used to configure the Finite Element (FE) models for HF evaluation. The HF seeding parameters can include various configurations, such as thickness distributions for creating three-dimensional FE models by extruding the material distribution in the horizontal plane to a specified height in the vertical direction, as illustrated in Fig. 1. The original objective and constraint functionals, $F_i$ and $G_j$, are utilized to find the best solutions in the HF evaluation. When $N_{\mathrm{obj}} \ge 2$, the objective functionals exhibit trade-offs with each other, resulting in the best solutions being Pareto optima represented by the vector $\mathbf{k}^*$ that contains multiple sample point indices.
[Figure 1: Construction of a three-dimensional HF model by extruding the material distribution in the horizontal plane to a height specified by the HF seeding parameters.]
The effectiveness of achieving a satisfactory solution set depends on the diversity of the candidates collected, considering a range of joint seeding parameters $(\mathbf{s}, \mathbf{t})$. Notably, both LF and HF seeding parameters have an impact on structural performance and should be optimized to eventually obtain high-performance structures. The conventional data-driven MFTD focuses on searching for satisfactory solutions in the material distribution space, which is initially determined by the LF seeding parameters and automatically updated by the EA-based optimization process, while HF models are evaluated with a consistent configuration applied across all candidates. In contrast, our proposed framework updates the initial candidates derived from various sets of LF and HF seeding parameters utilizing an EA-based optimization process similar to the conventional one but specifically adapted to the data architecture of a multi-channel image.
The overview of our proposed framework is illustrated in Fig. 2. Initially, a dataset of multi-channel images is prepared, encapsulating the necessary information for FE model construction, such as material distributions and thickness of reinforcements. This preparation involves solving the LF topology optimization (2) across a range of LF parameters to generate a variety of material distribution candidates, which are then stored in the first channel of the images. Subsequent channels are dedicated to HF parameters that define the mechanical performance of each solution, which may be represented as either constant values or distributions within the design domain.
Following the initial setup, steps 2 through 6 are executed in a loop until a convergence criterion is met. Step 2 involves evaluating each candidate solution through forward analysis on the HF model (e.g., static analysis in this context) to derive precise objective functionals. Here, material distributions from the first channel are mapped onto a three-dimensional design domain, with the HF model being constructed based on thickness values from the second channel. It should be noted that this framework has the flexibility to optimize not only thickness but also other physical properties through various HF model constructions.
Steps 3 to 6 encompass EA-based methods for autonomously refining design candidates. Step 3 selects promising solutions via elitism, while step 4 assesses convergence using the hypervolume indicator, a metric widely employed in multi-objective optimization to gauge the quality of optimized solutions. If convergence is not achieved, the process advances to step 5a, where new candidates are generated through a generative model in a crossover-like operation, or to step 5b to finalize designs upon convergence. Whereas the conventional data-driven MFTD utilized a single-channel VAE (SC-VAE) for candidate generation, our framework employs a multi-channel VAE (MC-VAE) [31] to enable the neural network of the VAE to learn the interactive relationship between material distributions and HF parameters. Periodically, the solution dataset undergoes a mutation-like operation to introduce mutants that diverge significantly from the existing dataset yet are likely to survive the selection process. This mutation is guided by an additional constraint referencing the current dataset, with outcomes recorded in the first channel of the mutated multi-channel images. Although this discussion focuses on bi-objective scenarios, the framework is adaptable to problems with three or more objectives.
[Figure 2: Overview of the proposed data-driven MFTD framework based on multi-channel image data.]
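The outer loop of steps 2 through 6 can be summarized in code. The sketch below is a minimal illustration, assuming hypothetical helper callables for HF evaluation, NSGA-II selection, hypervolume computation, MC-VAE crossover, and LF mutation; the actual implementation couples COMSOL, TensorFlow, and DEAP as detailed in Section 4.

```python
from typing import Callable, List

def data_driven_mftd(
    dataset: List,                 # multi-channel images (ch. 0: material, ch. 1: HF parameter)
    hf_evaluate: Callable,         # step 2: forward FE analysis -> (f1, f2)
    select_elites: Callable,       # step 3: NSGA-II elitism -> (survivors, pareto_front)
    hypervolume: Callable,         # step 4: hypervolume of the Pareto front
    crossover: Callable,           # step 5a: MC-VAE encode/decode
    mutate: Callable,              # mutation-like LF optimization
    max_iter: int = 100,
    eps_hv: float = 1e-3,
    mutation_interval: int = 5,
) -> List:
    """Outer loop of the proposed framework (steps 2-6 in Fig. 2).
    All helper callables are injected placeholders, not the paper's code."""
    hv_prev = None
    for t in range(1, max_iter + 1):
        objectives = [hf_evaluate(img) for img in dataset]      # step 2
        dataset, front = select_elites(dataset, objectives)     # step 3
        hv = hypervolume(front)                                 # step 4
        if hv_prev is not None and abs(hv - hv_prev) / hv_prev < eps_hv:
            break                                               # step 5b: converged
        hv_prev = hv
        offspring = crossover(dataset)                          # step 5a
        if t % mutation_interval == 0:
            offspring += mutate(front)                          # periodic mutation
        dataset = dataset + offspring
    return dataset
```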
As for step 4, similar to the data-driven MFTD based on the SC-VAE, the convergence of the objective space is checked based on a hypervolume indicator for the selected solution set $\mathcal{F}$ using a reference point $\mathbf{r}$, defined as follows [32]:
\[
\mathrm{HV}(\mathcal{F}; \mathbf{r}) = \Lambda\!\left( \bigcup_{\mathbf{f} \in \mathcal{F}} \left[ \mathbf{f}, \mathbf{r} \right] \right),
\tag{4}
\]
where the Lebesgue measure, denoted by $\Lambda$, is utilized to quantify the hypervolume indicator, and $[\mathbf{f}, \mathbf{r}]$ denotes the hyperrectangle spanned by an objective vector $\mathbf{f}$ and the reference point $\mathbf{r}$. Specifically for bi-objective problems, this indicator measures the area spanned by the non-dominated solutions in $\mathcal{F}$ and the reference point $\mathbf{r}$. The optimization process is deemed to have converged when the relative change of the hypervolume indicator $\mathrm{HV}$ between iterations falls below a predefined threshold.
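For the bi-objective case, the hypervolume reduces to the area dominated by the front up to the reference point and can be computed exactly with a simple sweep. The following sketch (a hypothetical helper, not the paper's code) illustrates both the computation and the relative-change convergence test:

```python
def hypervolume_2d(front, ref):
    """Exact hypervolume of a bi-objective minimization front.

    `front` is an iterable of (f1, f2) objective pairs and `ref` the
    reference point r, chosen to be dominated by every front member.
    Sweeps the points in ascending f1 and accumulates the rectangle
    each non-dominated point adds over the previously covered region.
    """
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                               # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Convergence test on the relative change of HV between iterations:
# converged = abs(hv - hv_prev) / hv_prev < eps_hv
```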
2.1 Selection
As introduced in the previous data-driven MFTD research [21], the selection of promising solutions is performed using a concept of elitism through the application of the non-dominated sorting genetic algorithm II (NSGA-II) [33], which is a well-established algorithm in the realm of multi-objective EAs. NSGA-II focuses on generating a set of non-dominated solutions, satisfying the following relationship, especially in a multi-objective minimization problem:
\[
\forall i \in \{1, \dots, N_{\mathrm{obj}}\}: \ F_i\!\left(\rho^{(a)}\right) \le F_i\!\left(\rho^{(b)}\right) \quad \wedge \quad \exists i \in \{1, \dots, N_{\mathrm{obj}}\}: \ F_i\!\left(\rho^{(a)}\right) < F_i\!\left(\rho^{(b)}\right),
\tag{5}
\]
This relationship signifies that solution $a$ dominates solution $b$, meaning the $a$-th solution is considered superior to the $b$-th one; a non-dominated solution is one that no other solution dominates.
NSGA-II selects solutions using non-dominated sorting and crowding distance sorting. In the non-dominated sorting phase, a set of solutions is categorized into different fronts based on the dominance relationship (5). The most dominant solutions, i.e., those not dominated by any other solution, are assigned to the first front called Pareto front. Successive fronts contain solutions that are dominated only by those in the previous fronts. Once solutions are sorted into fronts, the crowding distance for each solution is calculated. This metric measures the density of solutions surrounding a particular solution in the objective space. Within each front, solutions with larger crowding distances are given higher priority, as they are more isolated from other solutions and represent more diverse parts of the objective space. Finally, when selecting solutions for the next generation, NSGA-II starts with those in the lowest rank (first front) and proceeds to higher ranks. Within a rank, solutions with higher crowding distances are preferred. This process continues until the desired number of solutions for the next generation is achieved.
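A minimal sketch of this selection step using DEAP's selNSGA2, which performs exactly the non-dominated sorting and crowding-distance sorting described above; the Candidate wrapper and the "payload" key are illustrative assumptions, not names from the paper's implementation:

```python
from deap import base, creator, tools

# Bi-objective minimization, e.g. (strain energy, volume).
creator.create("FitnessMin", base.Fitness, weights=(-1.0, -1.0))
creator.create("Candidate", dict, fitness=creator.FitnessMin)

def nsga2_select(samples, objectives, n_survivors):
    """samples: arbitrary payloads (here, multi-channel images);
    objectives: list of (f1, f2) tuples from the HF evaluation."""
    pop = []
    for s, obj in zip(samples, objectives):
        ind = creator.Candidate(payload=s)
        ind.fitness.values = obj
        pop.append(ind)
    # Non-dominated sorting followed by crowding-distance sorting.
    survivors = tools.selNSGA2(pop, n_survivors)
    return [ind["payload"] for ind in survivors]
```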
2.2 Crossover
2.2.1 Preprocessing image data
Before employing the MC-VAE to generate a new candidate solution set, the input data containing material distribution information undergo preprocessing to enable a deep generative model to efficiently learn the characteristics of the input data. This preprocessing consists of two steps: Design Domain Mapping (DDM) and the smoothing of the material distributions.
DDM, as proposed by Yamasaki et al., serves to map material distributions from the original domain to another domain while preserving the boundary conditions inherited from the original domain [34]. In the previous data-driven MFTD, DDM was employed to resize the original data to fit within a unit square domain, which was discretized with square elements, i.e., a structured quad mesh, for the VAE's convenience. While the previous research focused primarily on resizing the mesh to a unit square, we extend its focus to mapping a 3D mesh onto a 2D plane mesh, in addition to mesh resizing. The detailed methodology behind this dimensional extension is elaborated in Appendix A.
After mapping the material distributions from the original design domain to the unit plane mesh, these mapped material distributions undergo smoothing via the Helmholtz PDE [35], as formulated below:
\[
- r^2 \nabla^2 \tilde{\rho} + \tilde{\rho} = \rho,
\tag{6}
\]
Here, $\rho$ represents the mapped material distribution, $\tilde{\rho}$ is the smoothed material distribution, and $r$ denotes the filtering radius. The smoothed material distribution is computed by solving Eq. (6) using FEM.
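As an illustration, the smoothing of Eq. (6) can be reproduced on a structured grid with a finite-difference discretization; the paper itself solves the PDE with FEM in COMSOL, so the following is only an assumed stand-in:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def helmholtz_smooth(rho: np.ndarray, r: float, h: float = 1.0) -> np.ndarray:
    """Solve -r^2 * Laplacian(rho_s) + rho_s = rho on a structured 2D grid
    with homogeneous Neumann boundaries and grid spacing h."""
    ny, nx = rho.shape

    def lap1d(m: int):
        e = np.ones(m)
        L = sp.diags([e[:-1], -2.0 * e, e[:-1]], [-1, 0, 1]).tolil()
        L[0, 0] = L[-1, -1] = -1.0      # mirrored (Neumann) boundary stencil
        return (L / h**2).tocsr()

    # 2D Laplacian from Kronecker products of the 1D operators.
    lap = sp.kron(sp.eye(ny), lap1d(nx)) + sp.kron(lap1d(ny), sp.eye(nx))
    A = (sp.eye(nx * ny) - r**2 * lap).tocsr()
    return spla.spsolve(A, rho.ravel()).reshape(ny, nx)
```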
2.2.2 Multi-channel variational auto-encoder
Building upon the fundamental principles of the standard VAE, a MC-VAE adds an extra layer of complexity by simultaneously processing multiple channels of input data. The primary advantage of the MC-VAE lies in its capacity to capture the underlying relationships between different data channels, which would be challenging for separate SC-VAEs to accomplish.
Consider a set of samples $\left\{ \mathbf{x}^{(k)} \right\}_{k=1}^{K}$, where each $\mathbf{x}^{(k)}$ denotes normalized pixel data with a total number of pixels $N$, and multiple channels of input data represented as $\mathbf{x} = \left[ \mathbf{x}_1, \dots, \mathbf{x}_C \right]$, where $C$ is the total number of channels and each $\mathbf{x}_c$ corresponds to the data from the $c$-th channel with $N_c$ pixels per channel. The total number of pixels accordingly results in $N = \sum_{c=1}^{C} N_c$. The first channel contains the material distribution for the sample, while the second and subsequent channels contain the HF seeding parameters' vector $\mathbf{t}$. Fig. 3 depicts the concept of the MC-VAE's data structure.
VAEs are generally structured around two principal neural networks: an encoder that transforms high-dimensional input data into latent variables $\mathbf{z} \in \mathbb{R}^{d_z}$, where $d_z$ is significantly smaller than $N$, and a decoder that reconstructs high-dimensional output data from these latent variables. Within the MC-VAE framework, data from each channel are processed by a dedicated encoder network, yielding several sets of latent variables, namely $\mathbf{z}_1, \dots, \mathbf{z}_C$. These sets are then concatenated to form a unified latent representation $\mathbf{z} = \left[ \mathbf{z}_1, \dots, \mathbf{z}_C \right]$. The MC-VAE's decoder network is engineered to utilize this unified latent representation to reconstruct the data for each channel, producing $\hat{\mathbf{x}}_1, \dots, \hat{\mathbf{x}}_C$. Following encoding and decoding, the first channel of the decoded image is remapped back onto the original domain to represent the material distribution. The VAE generates new data through its decoder, facilitated by ensuring that the compressed data adheres to a Gaussian distribution within the latent space.
[Figure 3: Data structure of the MC-VAE: per-channel encoders, a concatenated latent representation, and a decoder reconstructing all channels.]
The fundamental idea of the VAE lies in finding a probability distribution $p_\theta(\mathbf{x})$ from an approximated model, where $\theta$ denotes the parameters of the decoder to be optimized through the learning process so that the likelihood of the data is ultimately maximized. In other words, the likelihood of generating data points similar to the input data should be maximized during the iterations. However, this likelihood is computationally intractable to evaluate directly, so VAEs employ an approach known as variational inference, specifically leveraging the 'variational lower bound.' This approach lets us optimize the expression on the right-hand side of the following inequality instead of maximizing $\log p_\theta(\mathbf{x})$ directly:
\[
\log p_\theta(\mathbf{x}) \ \ge\ - D_{\mathrm{KL}}\!\left( q_\phi(\mathbf{z} \mid \mathbf{x}) \,\middle\|\, p(\mathbf{z}) \right) + \mathbb{E}_{q_\phi(\mathbf{z} \mid \mathbf{x})}\!\left[ \log p_\theta(\mathbf{x} \mid \mathbf{z}) \right],
\tag{7}
\]
where $q_\phi(\mathbf{z} \mid \mathbf{x})$ and $p_\theta(\mathbf{x} \mid \mathbf{z})$ respectively correspond to the encoder and decoder models, and $\phi$ and $\theta$ are the parameters of the encoder and decoder, respectively. The latent variable, denoted by $\mathbf{z}$, is given by:
\[
\mathbf{z} = \boldsymbol{\mu} + \boldsymbol{\sigma} \odot \boldsymbol{\epsilon}, \qquad \boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}),
\tag{8}
\]
where the mean and standard deviation of the approximate posterior, denoted as $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}$, respectively, represent the encoder's outputs for each individual sample. The symbol $\odot$ signifies element-wise multiplication, and $\boldsymbol{\epsilon}$ denotes a random sample drawn from the standard normal distribution. Eq. (8) is commonly referred to as the 'reparameterization trick,' a technique that facilitates gradient computation in probabilistic sampling within a neural network. It accomplishes this by breaking down the stochastic variable into two components: a deterministic element (such as the output of a neural network) and a sample originating from an independent noise source. It is important to note that the latent space captures features relevant to all channels while still preserving the characteristics of the traditional VAE.
The first term, $D_{\mathrm{KL}}\!\left( q_\phi(\mathbf{z} \mid \mathbf{x}) \,\|\, p(\mathbf{z}) \right)$, on the right-hand side of Eq. (7) represents the Kullback-Leibler (KL) divergence, used to quantify the disparity between the encoder's distribution and a standard Gaussian distribution. Minimizing this term leads to an approximation of the encoder distributions that closely resemble Gaussian distributions. The KL divergence is calculated according to its definition, as follows:
\[
D_{\mathrm{KL}}\!\left( q_\phi(\mathbf{z} \mid \mathbf{x}) \,\middle\|\, p(\mathbf{z}) \right) = -\frac{1}{2} \sum_{i=1}^{d_z} \left( 1 + \log \sigma_i^2 - \mu_i^2 - \sigma_i^2 \right).
\tag{9}
\]
The second term in Eq. (7) represents the reconstruction error between the original input material distributions and the decoded probability distributions, computed as a binary cross-entropy for each channel, as follows:
\[
\mathcal{L}_{\mathrm{rec}} = \sum_{c=1}^{C} \mathcal{L}_{\mathrm{BCE}}\!\left( \mathbf{x}_c, \hat{\mathbf{x}}_c \right), \qquad \mathcal{L}_{\mathrm{BCE}}\!\left( \mathbf{x}_c, \hat{\mathbf{x}}_c \right) = - \sum_{i=1}^{N_c} \left[ x_{c,i} \log \hat{x}_{c,i} + \left( 1 - x_{c,i} \right) \log\!\left( 1 - \hat{x}_{c,i} \right) \right],
\tag{10}
\]
where $\hat{\mathbf{x}}_c$ denotes the decoder output for the $c$-th channel. Finally, a loss function is computed by introducing a parameter $\beta$ to adjust the weight of the KL divergence, as shown below:
\[
\mathcal{L} = \beta\, D_{\mathrm{KL}}\!\left( q_\phi(\mathbf{z} \mid \mathbf{x}) \,\middle\|\, p(\mathbf{z}) \right) + \mathcal{L}_{\mathrm{rec}},
\tag{11}
\]
where $\beta$ represents the weight parameter known as the KL weight, which governs the influence of the KL divergence, encouraging the distribution in the latent space to approximate a standard normal distribution. A higher KL weight urges the model to prioritize aligning the latent space distribution closer to the standard normal distribution. The value of the KL weight impacts the trade-off between the generative diversity and reconstruction capability of the model. As shown in Eq. (11), the reconstruction error is computed for each channel first, and then the errors from all the channels are summed.
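A minimal TensorFlow sketch of the loss in Eq. (11), combining the closed-form KL term of Eq. (9) with the per-channel binary cross-entropy of Eq. (10); the tensor shapes and names are assumptions for illustration, not the paper's training code:

```python
import tensorflow as tf

def mc_vae_loss(x, x_hat, z_mean, z_logvar, beta=1.0):
    """Eq. (11): beta-weighted KL divergence plus the reconstruction error.

    x, x_hat: (batch, H, W, C) tensors with values in [0, 1];
    z_mean, z_logvar: (batch, d_z) encoder outputs.
    """
    # Closed-form KL divergence of Eq. (9) against N(0, I).
    kl = -0.5 * tf.reduce_sum(
        1.0 + z_logvar - tf.square(z_mean) - tf.exp(z_logvar), axis=-1)
    # Binary cross-entropy of Eq. (10), summed over pixels and channels.
    eps = 1e-7
    x_hat = tf.clip_by_value(x_hat, eps, 1.0 - eps)
    bce = -(x * tf.math.log(x_hat) + (1.0 - x) * tf.math.log(1.0 - x_hat))
    rec = tf.reduce_sum(bce, axis=[1, 2, 3])
    return tf.reduce_mean(beta * kl + rec)
```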
2.3 Mutation
To prevent premature convergence and maintain dataset diversity throughout the entire design process, mutant material distributions for the first channel are generated by a mutation-like operation, which Yaji et al. found effective in promoting more promising candidates. The desired mutation-like operation involves generating new material distributions that inherit topological features from the material distributions of reference samples in the target solution set under optimization, so that the mutant material distributions are more likely to survive the selection process. This operation is achieved by solving the LF topology optimization problem defined in Eq. (2), incorporating an additional constraint function:
\[
\widetilde{G}_{\mathrm{mut}}(\rho) := \varepsilon - \frac{\displaystyle \int_D \hat{\rho}_{\mathrm{ref}}(\mathbf{x})\, \rho(\mathbf{x})\, \mathrm{d}\Omega}{\displaystyle \int_D \hat{\rho}_{\mathrm{ref}}(\mathbf{x})\, \mathrm{d}\Omega} \le 0,
\tag{12}
\]
where $\hat{\rho}_{\mathrm{ref}}$ is a selected reference material distribution, and $\varepsilon \in [0, 1]$ is a parameter controlling the discrepancy between the reference material distribution and the optimized material distribution for mutation. The concept behind this mutation process is detailed in the reference [22]. Notably, a smaller value of $\varepsilon$ results in a more divergent material distribution, which inherits fewer topological features from the original material distribution. Careful consideration is required when determining the value of this parameter, as a highly divergent material distribution faces greater challenges in surviving the selection process.
3 Formulation
3.1 Topology optimization
This section briefly explores the formulation of topology optimization, specific to the solid isotropic material with penalization (SIMP) method, one of the most widely used methods in topology optimization. To solve an optimization problem such as (2), the design variable field is discretized into a set of design variables $\rho_e$ for each element $e = 1, \dots, N_e$ in the predefined design domain $D$, using the Finite Element Method (FEM) in this study. The governing equations are formulated based on the SIMP method, detailed as follows:
\[
\mathbf{K}\, \mathbf{u} = \mathbf{f}, \qquad \mathbf{K} = \sum_{e=1}^{N_e} E_e\!\left( \bar{\tilde{\rho}}_e \right) \mathbf{k}_e, \qquad E_e = E_{\mathrm{void}} + \bar{\tilde{\rho}}_e^{\,p} \left( E_{\mathrm{solid}} - E_{\mathrm{void}} \right),
\tag{13}
\]
where the stiffness matrix $\mathbf{K}$ represents a system of stiffness equations for elements, with a displacement vector $\mathbf{u}$ containing the displacements at each node or element, subject to applied loads represented by a force vector $\mathbf{f}$. Here, $\mathbf{k}_e$ denotes an element stiffness matrix, and $E_e$ represents the modified Young's modulus. $E_{\mathrm{solid}}$ and $E_{\mathrm{void}}$ correspond to the Young's moduli of solid and void materials, respectively. The parameter $p$ serves as a penalty factor that encourages the binarization of design variables. Furthermore, $\bar{\tilde{\rho}}_e$ denotes a filtered and projected design variable, as elaborated below.
To mitigate the emergence of overly complex topologies during optimization, a filtering process is typically incorporated into topology optimization. The design variables are subjected to filtering using a method proposed by Bourdin [36], which smooths the design variables by averaging each one with its neighbors within a filtering radius $R$, as shown below:
\[
\tilde{\rho}_e = \frac{\displaystyle \sum_{i \in \mathbb{N}_e} w(\mathbf{x}_i)\, \rho_i}{\displaystyle \sum_{i \in \mathbb{N}_e} w(\mathbf{x}_i)}, \qquad w(\mathbf{x}_i) = R - \left\| \mathbf{x}_i - \mathbf{x}_e \right\|,
\tag{14}
\]
where $\mathbb{N}_e$ represents the set of elements located within a distance less than the filter radius $R$ from the position $\mathbf{x}_e$ of element $e$. Additionally, a Heaviside projection is incorporated to remove grayscale results and achieve simpler yet sharper topologies [37], as illustrated below:
\[
\bar{\tilde{\rho}}_e = \frac{\tanh\!\left( \beta_{\mathrm{H}}\, \eta \right) + \tanh\!\left( \beta_{\mathrm{H}} \left( \tilde{\rho}_e - \eta \right) \right)}{\tanh\!\left( \beta_{\mathrm{H}}\, \eta \right) + \tanh\!\left( \beta_{\mathrm{H}} \left( 1 - \eta \right) \right)},
\tag{15}
\]
where $\beta_{\mathrm{H}}$ represents a scaling parameter that determines the sharpness of the projection, and $\eta$ is a threshold value determining whether the projected variable is adjusted closer to zero or one. The design variable field is iteratively updated using the Method of Moving Asymptotes (MMA), a well-established sequential convex programming technique [38], based on the sensitivity of the objective functional with respect to the design variable field, calculated using the chain rule.
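For illustration, the filtering of Eq. (14) and the projection of Eq. (15) can be sketched in NumPy on a structured element grid; this is an assumed grid-based version, not the COMSOL FEM implementation used in the paper:

```python
import numpy as np

def heaviside_project(rho_f: np.ndarray, beta_h: float, eta: float) -> np.ndarray:
    """Smoothed Heaviside projection of Eq. (15): beta_h controls the
    sharpness, eta the threshold between 'toward 0' and 'toward 1'."""
    return (np.tanh(beta_h * eta) + np.tanh(beta_h * (rho_f - eta))) / \
           (np.tanh(beta_h * eta) + np.tanh(beta_h * (1.0 - eta)))

def density_filter(rho: np.ndarray, R: int) -> np.ndarray:
    """Bourdin-type linear filter of Eq. (14) on a 2D element grid: each
    density becomes a cone-weighted average of its neighbors within
    radius R (measured in elements)."""
    ny, nx = rho.shape
    out = np.zeros_like(rho)
    for i in range(ny):
        for j in range(nx):
            num = den = 0.0
            for di in range(-R, R + 1):
                for dj in range(-R, R + 1):
                    w = R - np.hypot(di, dj)           # cone weight w = R - dist
                    if w > 0 and 0 <= i + di < ny and 0 <= j + dj < nx:
                        num += w * rho[i + di, j + dj]
                        den += w
            out[i, j] = num / den
    return out
```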
3.2 Stiffness maximization
A bi-objective topology optimization to maximize stiffness and minimize volume is written as follows:
\[
\begin{aligned}
\text{find} \quad & \rho_e \in \{0, 1\}, \quad e = 1, \dots, N_e, \\
\text{that minimize} \quad & \left[ F_1, F_2 \right] = \left[ \mathbf{f}^\top \mathbf{u},\ \sum_{e=1}^{N_e} v_e\, \bar{\tilde{\rho}}_e \right] \\
\text{subject to} \quad & \text{the governing equation (13)},
\end{aligned}
\tag{16}
\]
where $v_e$ represents the volume of each element. The objective functional $F_1$ is defined as the compliance of the structure, or strain energy, calculated by $\mathbf{f}^\top \mathbf{u}$, which can be considered the inverse of stiffness. Notably, the material distribution derived from this formulation is entirely discrete by its nature, allowing for a more precise evaluation of objective functionals than methods involving relaxation.
The bi-objective optimization problem (16) is converted to a mono-objective optimization problem with a constraint of maximum volume fraction $V_{\max}$, as follows:
\[
\begin{aligned}
\text{find} \quad & \rho_e \in [0, 1], \quad e = 1, \dots, N_e, \\
\text{that minimize} \quad & \mathbf{f}^\top \mathbf{u} \\
\text{subject to} \quad & \frac{\sum_{e=1}^{N_e} v_e\, \bar{\tilde{\rho}}_e}{\sum_{e=1}^{N_e} v_e} \le V_{\max} \\
& \text{and the governing equation (13)},
\end{aligned}
\tag{17}
\]
where $\rho_e$ is transitioned from a discrete to a continuous variable. Solving this optimization problem (17) with various values of the maximum volume constraint $V_{\max}$ allows us to generate a variety of material distribution candidates to be stored in the first channel of multi-channel images as the initial designs, combined with randomly assigned HF seeding parameters.
The material distributions derived from the LF topology optimization or the first channel of the MC-VAE output undergo a smoothing process to approximate more binary-like distributions, and are subsequently binarized using a density threshold $\rho_{\mathrm{th}}$. This binarization process ensures that the material distribution is clearly defined as either solid or void, with only the elements designated as solid being extruded to the height given by the HF seeding parameters, stored in the second channel of the multi-channel images, to form the reinforcement structures.
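A minimal sketch of this binarize-and-extrude step, assuming a two-channel image array and a threshold value; all names are illustrative:

```python
import numpy as np

def build_hf_heights(image: np.ndarray, rho_th: float = 0.5) -> np.ndarray:
    """Construct the extrusion-height field of an HF model from a
    two-channel image: channel 0 holds the (smoothed) material density,
    channel 1 the HF thickness parameter. Elements whose density exceeds
    the threshold are extruded to the channel-1 height; the rest are void.
    rho_th = 0.5 is an assumed threshold value."""
    solid = image[..., 0] > rho_th                 # binarization
    return np.where(solid, image[..., 1], 0.0)     # per-element extrusion height
```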
3.3 Maximum stress minimization
A bi-objective topology optimization to minimize maximum stress and volume is written as follows:
\[
\begin{aligned}
\text{find} \quad & \rho_e \in \{0, 1\}, \quad e = 1, \dots, N_e, \\
\text{that minimize} \quad & \left[ F_1, F_2 \right] = \left[ \max_e \sigma_e^{\mathrm{vM}},\ \sum_{e=1}^{N_e} v_e\, \bar{\tilde{\rho}}_e \right] \\
\text{subject to} \quad & \text{the governing equation (13)},
\end{aligned}
\tag{18}
\]
where $\sigma_e^{\mathrm{vM}}$ is the von Mises stress of each element.
This bi-objective optimization problem (18) is a so-called mini-max problem, which is not amenable to gradient-based optimization methods due to its non-differentiability. To allow for gradient-based optimization, a $P$-norm function [39, 40] is employed to approximate the maximum stress within the design domain, transforming the original bi-objective optimization problem (18) into a mono-objective optimization problem, as follows:
\[
\begin{aligned}
\text{find} \quad & \rho_e \in [0, 1], \quad e = 1, \dots, N_e, \\
\text{that minimize} \quad & \sigma_{\mathrm{PN}} = \left( \sum_{e=1}^{N_e} \left( \hat{\sigma}_e \right)^{P} \right)^{1/P}, \qquad \hat{\sigma}_e = \bar{\tilde{\rho}}_e^{\,q}\, \sigma_e^{\mathrm{vM}}, \\
\text{subject to} \quad & \frac{\sum_{e=1}^{N_e} v_e\, \bar{\tilde{\rho}}_e}{\sum_{e=1}^{N_e} v_e} \le V_{\max} \\
& \text{and the governing equation (13)},
\end{aligned}
\tag{19}
\]
where the stress field is modified to the relaxed form $\hat{\sigma}_e$ to facilitate the binarization of design variables and mitigate numerical challenges encountered when optimizing design variables near material boundaries, where stress can change abruptly. This modification ensures that design variables transition smoothly, avoiding sharp increases in stress as their values approach zero. The approximated stress, denoted by $\sigma_{\mathrm{PN}}$, converges to the actual maximum stress as $P$ approaches infinity. However, as $P$ increases, so does the computational complexity. Consequently, the value of $P$ should be carefully determined, compromising between computational stability and evaluation accuracy.
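The $P$-norm aggregation itself can be sketched as follows; factoring out the maximum stress before exponentiation is a standard numerical safeguard, and the default $P$ is only a placeholder for the continuation on $P$ described in Section 4.1:

```python
import numpy as np

def pnorm_stress(sigma_vm: np.ndarray, p: float = 8.0) -> float:
    """P-norm aggregation of element von Mises stresses, the smooth
    surrogate for the maximum stress in Eq. (19). Approaches the true
    maximum as p grows but becomes ill-conditioned for large p."""
    s = np.asarray(sigma_vm, dtype=float)
    smax = s.max()
    # Factor out the maximum for numerical stability before exponentiation.
    return smax * (((s / smax) ** p).sum()) ** (1.0 / p)
```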
Similar to the approach for stiffness maximization, material distributions are binarized, and HF models are constructed using the defined material boundaries and HF parameters. Subsequently, von Mises stress and volume are evaluated within these FE models. It is important to note that the maximum stress is determined directly from the von Mises stress distribution across the design domain, bypassing any relaxation or approximation strategies.
4 Numerical implementation
This section outlines the detailed implementation of the proposed framework. Building upon the core concepts previously discussed, a pseudo-code of the proposed framework is presented in Algorithm 1, tailored for bi-objective optimization problems such as those defined by Eq. (16) and Eq. (18). The parameters governing the entire process are detailed in Table 1. The LF seeding parameters and the HF seeding parameters are determined through Latin Hypercube Sampling (LHS) from predefined intervals. LHS is a statistical technique designed to generate a diverse set of plausible parameter combinations from a multidimensional distribution, commonly employed in computer simulations to efficiently explore varied design spaces. The bi-objective functionals $F_1$ and $F_2$ are evaluated under the condition that the constraint functional is met for the dataset. The design iteration continues until the relative error of the hypervolume indicator falls below a predefined threshold, or the iteration count reaches a predefined maximum. The reference point for hypervolume calculations, as per Eq. (4), is set to values marginally exceeding the highest initial objective values found on the Pareto front. A mutation-like process is conducted at fixed intervals of iterations, solving the LF topology optimization for reference samples across variations of the LF seeding parameters. Regarding the mutation constraint (12), the reference material distributions are uniformly selected from the Pareto-front solutions. Among all the processes, HF evaluation demands the most computational resources, as it involves running numerical simulations for each sample in the temporary dataset at every iteration. To optimize computation time, the HF evaluations (lines 9-11 of Algorithm 1) were performed in parallel, leveraging the independence of each simulation from the others.
In the selection process, the number of offspring should be carefully determined due to its influence on the efficiency of both the entire optimization and the MC-VAE's learning performance. The MC-VAE's learning performance is significantly influenced by both the network architecture and the volume of the input dataset, particularly when this dataset is relatively limited. A consistent offspring count for the MC-VAE's input dataset promotes more stable learning outcomes because the MC-VAE's network architecture remains unchanged throughout the optimization iterations. Conversely, a larger offspring count allows the MC-VAE to learn the characteristics of the input dataset more effectively. Therefore, we opt to determine the number of offspring by setting a minimum threshold and selecting all Pareto-front points as offspring when their total number surpasses this minimum, as sketched below.
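This offspring rule reduces to a few lines; `ranked_candidates` (ordered by front rank and crowding distance) and `n_min` are assumed names:

```python
def select_offspring(ranked_candidates, pareto_front, n_min):
    """Take every Pareto-front point once the front outgrows the minimum
    threshold n_min; otherwise pad with the next-best candidates from the
    NSGA-II ranking to keep the MC-VAE input size, and hence its training
    behaviour, stable."""
    if len(pareto_front) >= n_min:
        return list(pareto_front)
    return list(ranked_candidates[:n_min])
```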
The framework outlined in Algorithm (1) was developed using Python (version 3.9.13) as the primary programming environment, incorporating a variety of tools for its implementation. The LF topology optimization and HF evaluation, along with the solution of Helmholtz PDEs, were executed in COMSOL Multiphysics (version 6.1), a commercial FEM software. In this study, COMSOL Multiphysics was controlled via scripts written in MATLAB (version 2023a). The MC-VAE was constructed using TensorFlow (version 2.12.0), an open-source machine learning framework renowned for its versatility in developing and deploying sophisticated machine learning models. DDM was accomplished with the aid of a mapping function from the IGL Python library (version 2.4.1). The traditional EA operations, such as selection and hypervolume calculations, were facilitated by DEAP (version 1.3.3), an open-source Python library designed for crafting and executing genetic algorithms and other evolutionary computing strategies.
Parameter | Symbol | Value |
---|---|---|
Number of LF seeding parameters | ||
Number of channels | ||
Maximum iterations | ||
Number of seeding parameters in mutation | ||
Number of mutant generations for each seeding parameter | |
Interval of mutation | ||
Threshold value of hypervolume indicator | ||
Number of samples generated by MC-VAE |
4.1 Low-fidelity topology optimization
For both the stiffness maximization problem (17) and the maximum stress minimization problem (19), the raw design variable field $\rho$, the filtered design variable field $\tilde{\rho}$, and the displacement field $\mathbf{u}$ are represented using shell elements that feature five integration points: one at each corner of the quadrilateral and one at the center. In contrast, the projected design variable field $\bar{\tilde{\rho}}$ is represented using Lagrange quadrilateral finite elements, which are suited for element-wise material distribution. The parameters for the LF topology optimization are detailed in Table 2.
The design variables are iteratively updated using the MMA, with a fixed move limit for all LF optimization simulations. The sensitivity analysis of the objectives with respect to the design variables employs a discrete adjoint method implemented in COMSOL Multiphysics. The maximum number of iterations was set higher for maximum stress minimization than for stiffness maximization to accommodate the greater non-linearity and complexity of the stress problem. The exponent $P$ in Eq. (19) was initially set to a small value and increased at fixed iteration intervals based on the continuation method [41]. The penalty parameter $q$ for stress relaxation was set to a constant value.
Parameter | Symbol | Value |
---|---|---|
Filter radius | ||
Scaling factor of projection | ||
Threshold value of projection | ||
Young’s modules of solid material | ||
Young’s modules of void material | ||
Number of elements | ||
Penalty factor |
4.2 Multi-channel variational auto-encoder
Here, we explore the detailed implementation of the MC-VAE, including its network architecture. Unlike the approach taken in previous research [22], which utilized a basic VAE architecture comprising a multilayer perceptron with two dense layers, we adopt CNNs, which are engineered specifically for processing and understanding visual data. Whereas dense layers flatten the input image's pixels and assign independent weights to each pixel, convolutional layers apply a series of localized dot products across small sections of the input image using a filter. Furthermore, while the number of weights in a dense layer simply corresponds to the total pixel count of multi-channel images, CNNs significantly reduce the number of weights by utilizing a filter that matches the input image's channel count and sliding this filter across all image pixels. Consequently, CNNs excel at capturing the spatial relationships within multi-channel data through their convolutional layers. This ability to extract spatial features enables VAEs to generate more meaningful and compact representations within the latent space, leading to enhanced image reconstruction during decoding. The powerful feature extraction capabilities of CNNs motivated us to implement the MC-VAE using CNNs to more efficiently manage the growing complexity of input data, which now includes multiple channels.
The encoder of the MC-VAE is composed of an input layer accepting the pixel data of two channels, two convolutional layers, a flattening layer, and a dense layer. The decoder consists of an input layer accepting the latent vector, a dense layer, and two deconvolutional layers. The convolutional and deconvolutional layers share a fixed kernel size and stride. The final layer applies a sigmoid activation function to ensure the output values lie between 0 and 1, whereas every convolutional layer but the final one applies a ReLU activation function. It is important to note that the output values for the first channel can be directly used as material distributions, but the values for the second channel need denormalization, which maps the normalized outputs back to the original HF parameter range. As the optimizer for training the neural networks, we used Adam [42], an extension of the stochastic gradient descent method, which is widely used as the optimizer of neural networks including VAEs. The maximum number of epochs, the learning rate, and the mini-batch size are fixed in advance, as is the KL weight $\beta$ controlling the influence of the KL divergence. The neural networks are trained until the maximum number of epochs is reached or the loss function has not improved for 40 epochs in total.
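A Keras sketch consistent with this architecture is given below. The pixel resolution, filter counts, kernel size, stride, and latent dimension are illustrative placeholders, since the exact values are not restated here:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

H = W = 64       # pixels per side (assumption)
C = 2            # channel 0: material distribution, channel 1: HF parameter
LATENT = 8       # latent vector size (assumption)

# --- Encoder: two convolutions, flatten, dense heads for mean/log-variance ---
enc_in = layers.Input(shape=(H, W, C))
x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(enc_in)
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Flatten()(x)
z_mean = layers.Dense(LATENT)(x)
z_logvar = layers.Dense(LATENT)(x)

def reparameterize(args):
    mean, logvar = args
    eps = tf.random.normal(tf.shape(mean))
    return mean + tf.exp(0.5 * logvar) * eps        # Eq. (8)

z = layers.Lambda(reparameterize)([z_mean, z_logvar])
encoder = Model(enc_in, [z_mean, z_logvar, z], name="encoder")

# --- Decoder: dense, reshape, two deconvolutions, sigmoid output ---
dec_in = layers.Input(shape=(LATENT,))
y = layers.Dense((H // 4) * (W // 4) * 32, activation="relu")(dec_in)
y = layers.Reshape((H // 4, W // 4, 32))(y)
y = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(y)
y = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(y)
dec_out = layers.Conv2DTranspose(C, 3, padding="same", activation="sigmoid")(y)
decoder = Model(dec_in, dec_out, name="decoder")
```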
To validate the implementation of the MC-VAE's network architecture, the MC-VAE was trained on two distinct example datasets. In both datasets, the first channel contained identical material distributions, while the second channel held a scalar HF parameter for each sample. For one dataset, this scalar HF parameter was directly proportional to the volume fraction derived from the first channel; for the other, it was inversely proportional. The images generated from these datasets are displayed in Fig. 4. Here, solid materials in the first channel are color-coded to represent the HF parameter in the second channel, with black indicating void areas. Despite the relatively small size of each dataset, significantly smaller than typical VAE dataset sizes, the results, as shown in Fig. 4, demonstrate that the MC-VAE successfully captured both the topological features of the material distributions and the relationship between these features and the HF parameter. This is evident as samples with a higher volume in the proportional dataset correspondingly stored higher HF parameters in the second channel, and the inverse was true for the inversely proportional dataset. This effective learning from a small dataset can be attributed to the simplicity of the features in the input images and the straightforward relationship between the material distribution and the HF parameter. Essentially, the MC-VAE needed to grasp either a proportional or inversely proportional relationship between the two channels. As the number of channels increases, there may be a need to enlarge the input dataset or adjust the network architecture for more efficient learning. Furthermore, Fig. 4 suggests that the HF parameters in the second channel are represented not as scalar values but as distributions, even when the input dataset's second channel contained scalar values. Therefore, when constructing HF models, if a scalar value for the HF parameter is required, it must be extracted from the decoded images through a mathematical computation such as averaging. As a preliminary study, this research utilized scalar values for the HF parameter in each sample to assess the framework's capability to identify promising samples within the HF modeling space.
[Figure 4: Images generated by the MC-VAE for the two validation datasets; solid material in the first channel is color-coded by the HF parameter stored in the second channel, with black indicating void.]
The number of decoded samples is fixed, while the number of input samples often exceeds it, due to the implementation of a minimum offspring number and the inclusion of all Pareto-front samples in the selection process. This excess raises the potential for overfitting within the MC-VAE, as the input sample size surpasses the initial dataset size used for network tuning. Nonetheless, minor overfitting does not critically impact the framework's effectiveness, as the MC-VAE's primary role is to facilitate crossover by producing a multitude of samples that inherit the target dataset's features. These decoded samples, while bearing resemblance to the input dataset in terms of topological features and HF parameters, exhibit slight variations. It is important to note that the generation of samples with significantly different features is achieved through the mutation-like operation, as detailed in Section 2.3.
A particular challenge for MC-VAE is maintaining sample diversity, especially when the input dataset is significantly imbalanced. The selection of solutions based on LF and HF parameters, coupled with the optimization problem’s nature, often results in a disproportionate representation of samples with extreme values for one objective and the inverse for the other. This imbalance is particularly pronounced for samples with medium values of both objectives, becoming more acute with each iteration and leading to an increasingly biased input dataset for MC-VAE.
To mitigate the iterative bias in the MC-VAE dataset, we employed a sampling strategy called oversampling [43], applied at the input stage of the MC-VAE. Oversampling rectifies class imbalances within datasets by duplicating under-represented samples until a balance is achieved. This encourages the MC-VAE within the data-driven MFTD framework to develop a more comprehensive understanding of less frequent data, thereby enhancing its capability to learn the input dataset's characteristics more effectively while maintaining diversity.
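A naive oversampling sketch is shown below; binning the samples by volume fraction is an assumed balance criterion for illustration:

```python
import numpy as np

def oversample_by_volume(samples, volumes, n_bins=5, seed=0):
    """Bin samples by a balance criterion (here, volume fraction) and
    duplicate members of under-populated bins until every non-empty bin
    reaches the largest bin's count."""
    volumes = np.asarray(volumes)
    edges = np.linspace(volumes.min(), volumes.max(), n_bins + 1)[1:-1]
    bins = np.digitize(volumes, edges)
    counts = np.bincount(bins, minlength=n_bins)
    target = counts.max()
    balanced = list(samples)
    rng = np.random.default_rng(seed)
    for b in range(n_bins):
        idx = np.flatnonzero(bins == b)
        if 0 < len(idx) < target:
            extra = rng.choice(idx, size=target - len(idx), replace=True)
            balanced.extend(samples[i] for i in extra)   # duplicate the rare bin
    return balanced
```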
4.3 High-fidelity evaluation
In the scenarios of both stiffness maximization and maximum stress minimization, the reinforcement structures are created by extruding the material distributions, thresholded by the density threshold $\rho_{\mathrm{th}}$, to a height determined by the scalar HF parameter value of each sample. This approach allows each sample's HF parameter field to contain a scalar value, facilitating the validation of the entire framework, as depicted in Fig. 5. It is important to note that the HF evaluation can potentially accommodate HF models with vector HF parameters that vary across the design domain, such as thickness or temperature distributions, owing to the adaptive learning capabilities of the MC-VAE.
The displacement and stress fields are discretized using Lagrange tetrahedral finite elements. Significantly, the mesh configuration can differ between LF and HF models without issue, leveraging the multifidelity approach where LF optimization and HF evaluation are independently processed. Consequently, this study employs solid meshes for HF models to ensure detailed evaluations and shell meshes for LF models to conserve computational resources.
[Figure 5: Construction of the HF model: the binarized material distribution is extruded to the height given by the scalar HF parameter of each sample.]
4.4 Mutation
This section explores the mutation-like operation tailored for multi-channel images, which encapsulate both material distributions and HF parameter fields. Following the generation of diverse multi-channel images via the MC-VAE, a subset of multi-channel images is earmarked for mutation. The concept of the multi-channel mutation-like operation adopted in this study is depicted in Fig. 6. Specifically, the material distribution within the first channel of a reference image is utilized as a constraint in the LF topology optimization process, as described by Eq. (12), with the discrepancy parameter $\varepsilon$ set to a predefined value. The rest of the parameters for this optimization mirror those of the standard LF topology optimization.
Upon completion of the LF topology optimization for mutation, the resultant material distribution is stored in the first channel of the mutated sample image. For subsequent channels in the mutated samples, it is crucial that these distributions correlate with the primary material distribution to facilitate the MC-VAE's learning of inter-channel relationships. Consequently, the HF parameters in these channels are assigned as distributions obtained by multiplying the pixel values in the first channel by the predefined HF parameters.
As outlined in section 4.2, this study adopted a scalar value for the second channel to assess the framework’s efficacy. Thus, the HF parameter field in the second channel is assigned a distribution derived by multiplying a scalar HF parameter sampled via LHS and the pixel values in the first channel. To refine this approach, HF parameters in subsequent channels of mutated samples could be directly inherited from the corresponding channels of reference samples, enhancing the probability of these mutated samples surviving through the selection phase.
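This channel assembly amounts to a single multiplication; `mutant_rho` and `t_lhs` are illustrative names, not identifiers from the paper's code:

```python
import numpy as np

def assemble_mutant(mutant_rho: np.ndarray, t_lhs: float) -> np.ndarray:
    """Stack a mutated (H, W) material distribution with an HF parameter
    field correlated to it: the scalar HF value drawn by LHS times the
    material density, stored as the second channel."""
    hf_field = t_lhs * mutant_rho
    return np.stack([mutant_rho, hf_field], axis=-1)
```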
[Figure 6: Concept of the multi-channel mutation-like operation: the first channel is mutated via constrained LF optimization, and subsequent channels are re-assigned HF parameter distributions correlated with it.]
5 Results and discussion
The efficacy of the proposed framework is illustrated through two numerical examples. The initial example addresses a stiffness maximization problem, a cornerstone problem within structural design. This example seeks to demonstrate that the proposed framework can identify high-performance solutions by efficiently exploring the HF parameter space, alongside the traditional LF parameter space that delineates material distributions. It is presumed that the topology optimization for this straightforward problem can be directly resolved. For validation, the outcomes derived from the proposed framework are juxtaposed with those obtained from reference solutions solved with uniformly sampled LF and HF parameters.
The subsequent example tackles a maximum stress minimization problem, which inherently requires relaxation and approximation methods for stress to be solvable. Prior research has shown that the mini-max stress problem could be indirectly addressed through the data-driven MFTD, yet without specifically aiming to explore the HF parameter space for superior solutions [27]. Thus, this second example endeavors to demonstrate the effectiveness of the proposed data-driven MFTD in navigating both LF and HF parameter spaces for the maximum stress minimization problem to secure high-performance solutions. To confirm this, the stress optimization problem was solved using relaxation and approximation methods, followed by an evaluation of the objective values with HF models, bypassing relaxation and approximation. Furthermore, as an alternative LF optimization within the data-driven MFTD, the stiffness maximization problem was also solved, subsequently assessing the stress and volume objectives to gauge the impact of the LF optimization problem on the final solution’s performance.
5.1 Problem setup
In practical design scenarios, the target surface is often three-dimensional thin-walled structures with curvature, necessitating the capability of the framework to handle such structures. To facilitate this, a part of a quadric surface was selected for the target surface, as expressed:
(20) |
where the coefficient of the quadric surface was set to a prescribed value.
The setup for the LF optimization, including dimensions, loading conditions, and boundary conditions, is depicted in Fig. 7 a). The design domain is modeled as a cone surface with square-trimmed edges, where the width is defined by a reference length. A uniform edge load is applied at the lower right corner in the negative direction. A Dirichlet boundary condition is applied to the left wall of the design domain. This setup is widely recognized as the cantilever beam problem, a prevalent benchmark for topology optimization, albeit with the design domain being a three-dimensional bending surface.
The HF configuration, particularly regarding the loading and boundary conditions, is depicted in Fig. 7 b). These configurations aim to extend the LF optimization settings for thin-walled shell models, as shown in Fig. 7 a), to those suitable for extruded solid models. Consequently, the edge load in the LF configuration is broadened to an area load in the HF setup. Tetrahedral meshes for the skin and reinforcement structures are shown in Fig. 7 c), colored in blue and red, respectively. The mesh density is adjusted based on proximity to the boundary surfaces, enhancing the accuracy of stress evaluation in areas where stress is likely to concentrate.
[Figure 7: a) LF optimization setup with dimensions, loading, and boundary conditions; b) HF loading and boundary conditions for the extruded solid model; c) tetrahedral meshes of the skin (blue) and reinforcement (red).]
5.2 Validation
5.2.1 Comparison with reference solutions
The material distributions derived from the LF optimization are depicted in Fig. 8 a), organized by the maximum volume constraint $V_{\max}$. Samples with a higher $V_{\max}$ exhibit a thicker topology of material distribution. The central ribs, connecting the arch-shaped outer rib to the bottom rib, are notably thinner than the connecting ribs. These configurations are characteristic outcomes of the cantilever beam problem addressed through topology optimization, specifically using the SIMP method, albeit with slightly thicker and curved ribs at the cone's apex. This curvature is attributed to the top of the cone being more prone to deformation without significant support, leading to a potential surge in the strain energy. Consequently, the results from the LF stiffness maximization align qualitatively with the anticipated outcomes.
The initial dataset was created by assigning HF parameters sampled via LHS, as illustrated in Fig. 8 b). The samples are organized based on the assigned HF parameters. Fig. 8 b) demonstrates that the initial dataset comprehensively represents all categories of HF parameters, thanks to the even distribution achieved by LHS. This dataset was then prepared for input into the MC-VAE, excluding samples that did not meet the volume constraint, as material boundaries might shift during the smoothing phase.

To validate the proposed framework, reference solutions were obtained by solving the same stiffness maximization problem and then extruding the results to heights uniformly sampled from the same range as the proposed framework's HF parameters. The reference solutions were then evaluated using the HF model, and their objective values were compared with those from the proposed framework. Objective values derived from the final solutions of the proposed framework and from the reference solutions are juxtaposed in Fig. 9. The figure indicates that the proposed framework identifies high-performance solutions across a broader objective space: its lower-volume solutions perform considerably better than the reference solutions, while performance at higher volumes is equivalent. This implies that a parametric study with uniformly sampled HF parameters can yield suboptimal performance, as it lacks the comprehensive exploration of the vast HF parameter space needed to find more global solutions.

The structural performance of the optimized model is compared with that of the reference model in Fig. 10, on the right and left, respectively. Although the volumes of the two models are the same, as shown in Fig. 10 a), the optimized model's reinforcement has a taller and thinner sectional shape without the middle bridge present in the reference model, leading to a lower strain energy. This is evident in the stress and displacement fields shown in Fig. 10 b) and c), respectively. Both models exhibit similar displacement and stress distributions: the displacement is significant at the top and the bottom right corner, and the stress concentrates at the top and bottom corners of the design domain and at its center, where the curvature is greatest. This similarity arises because their topological features are nearly identical; however, the optimized model's reinforcement supports the structure more efficiently with its thinner sectional shape and larger height, resulting in lower stress and displacement fields overall than those of the reference model.

5.2.2 Effect of oversampling
The evolution of the hypervolume indicator from the stiffness maximization is depicted in Fig. 11. Hypervolume indicators were computed using , where the objective functions and represent strain energy and volume, respectively. The optimization process converged at the 24th iteration, when the relative error of the hypervolume indicator dropped below the threshold. The hypervolume indicator improved over the iterations, indicating an expansion of the Pareto front towards the utopia point in the objective space. The improvements in the hypervolume indicator were . Notably, convergence occurred significantly sooner, and the enhancements in the hypervolume indicator were more modest, compared to the outcomes of the data-driven MFTD in previous studies [22], which tackled thermal-fluid problems characterized by greater non-linearity than the stiffness maximization problem. The earlier convergence and smaller improvement for stiffness maximization can be attributed to the relative simplicity of the optimization problem.
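For reference, a two-objective hypervolume of this kind reduces to a simple rectangle decomposition, as in the sketch below; the reference point and inputs are placeholders, not the values used in the study.

```python
# Minimal sketch: hypervolume of a 2-D Pareto front for a minimization
# problem, computed by slicing the dominated region into rectangles.
import numpy as np

def hypervolume_2d(front: np.ndarray, ref: np.ndarray) -> float:
    """`front` is an (n, 2) array of non-dominated points (both objectives
    minimized); `ref` is a reference point dominated by every member."""
    pts = front[np.argsort(front[:, 0])]      # ascending in the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # rectangle added by this point
        prev_f2 = f2
    return hv
```

The relative change of this scalar between iterations is what serves as the convergence criterion described above.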

Fig. 12 displays the denormalized images decoded by the MC-VAE at the a) initial and b) last iteration, where the images are sorted by the corresponding total mass values. At the last iteration, simpler topologies with central ribs and no holes are more prevalent; as the total mass increases, however, the topologies feature more complex reinforcements with holes and ribs that support the entire structure more effectively. This trend holds for the height of the reinforcement as well: as the total mass increases, the height of the reinforcement increases and the strain energy decreases. These trends validate that the MC-VAE effectively learns the physical relationship between the sectional shape and the height of the reinforcements, stored in the first and second channels, respectively.
Additionally, the material boundaries of the material distributions decoded at the last iteration are more distinct than those decoded at the initial iteration. This discrepancy arises because, at the initial iteration, the dataset for the MC-VAE contains no physical relationship between the material distribution and the HF parameter, which makes it more difficult to learn the relationship between the two channels effectively.
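As a rough illustration of the multi-channel layout discussed here, a single training sample could be assembled as follows; the image shape, normalization bounds, and function name are assumptions for the example.

```python
# Minimal sketch: pack a material distribution and its HF parameter into a
# two-channel image for the MC-VAE. Bounds and shapes are illustrative.
import numpy as np

def make_mc_sample(density: np.ndarray, hf_param: float,
                   hf_min: float = 0.02, hf_max: float = 0.10) -> np.ndarray:
    """Return a (2, H, W) array: channel 0 is the material distribution,
    channel 1 is the HF parameter broadcast as a constant pixel value."""
    hf_norm = (hf_param - hf_min) / (hf_max - hf_min)  # normalize to [0, 1]
    hf_channel = np.full_like(density, hf_norm)        # constant-valued channel
    return np.stack([density, hf_channel], axis=0)
```

Storing the HF parameter as a constant image channel is what allows the VAE's convolutional layers to treat the material distribution and the extrusion height as one joint input.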

5.3 Maximum stress minimization
Now we apply the proposed framework, validated above on a fundamental topology optimization problem, to the maximum stress minimization problem, which involves strong non-linearity.
5.3.1 Comparison with conventional framework
To assess the capability of the proposed data-driven MFTD in identifying high-performance samples throughout the HF parameter space, the conventional data-driven MFTD approach was employed for maximum stress minimization using the same problem settings and HF parameters . The objective functionals derived from both the conventional data-driven MFTD with the SC-VAE and the proposed data-driven MFTD with the MC-VAE are illustrated in Fig. 13. The conventional data-driven MFTD with the SC-VAE yielded samples that are partly positioned on the Pareto front of the MC-VAE case, whereas its samples with smaller volumes and higher maximum stress are often positioned away from the Pareto front, indicating that they are less effective. Predicting where optimized samples will be placed in the objective space before applying the conventional data-driven MFTD with a single HF parameter is challenging. Therefore, to achieve a Pareto front as comprehensive as that obtained with the proposed data-driven MFTD, extensive parametric studies over various HF parameters would be necessary if relying solely on the conventional data-driven MFTD. This underscores the significant advantage of the proposed data-driven MFTD in globally searching for optima across material distributions and HF parameters. It is noteworthy that the conventional data-driven MFTD with yielded some samples outperforming those from the proposed data-driven MFTD, suggesting that the latter could further enhance its capability to discover more promising solutions through an even more global search across material distributions and HF parameters.
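For clarity, the non-dominated set used in such comparisons can be extracted as in the following sketch, assuming both objectives (maximum stress and volume) are to be minimized; the function name is illustrative.

```python
# Minimal sketch: boolean mask of Pareto-optimal rows in an (n, 2) array
# of objective values, with both objectives minimized.
import numpy as np

def pareto_mask(objs: np.ndarray) -> np.ndarray:
    n = objs.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Point i is dominated if some other point is no worse in every
        # objective and strictly better in at least one.
        dominators = np.all(objs <= objs[i], axis=1) & np.any(objs < objs[i], axis=1)
        if np.any(dominators):
            mask[i] = False
    return mask
```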

The structural performance of the optimized models obtained from LF maximum stress minimization and LF stiffness maximization, and of the initial model from LF maximum stress minimization, are juxtaposed in Fig. 14 on the right, middle, and left, respectively. Although all models have the same volume of , as shown in Fig. 14 a), the initial model's reinforcement has a more complex topology with partially narrow ribs and many holes, causing a higher stress concentration, as shown in Fig. 14 b). The optimized models from both LF maximum stress minimization and LF stiffness maximization exhibit simpler topologies with uniformly thick ribs and fewer holes, leading to lower stress concentrations, as shown in Fig. 14 b). Interestingly, although the maximum stress increases in the order of the stress-based optimized, stiffness-based optimized, and stress-based initial models, the displacement increases in the order of the stiffness-based optimized, stress-based initial, and stress-based optimized models, as shown in Fig. 14 c). This discrepancy arises because the stress concentration depends more strongly on the topology of the reinforcement than the stiffness or deformation does, as clearly demonstrated by comparing the optimized models from LF stiffness maximization and LF maximum stress minimization, where the latter exhibits a lower maximum stress but a higher displacement. It is noteworthy that the stiffness-based model has a lower displacement than both the stress-based initial and optimized models, indicating that the optimized model from LF stiffness maximization reduces the stress concentration by increasing the stiffness, as expected.

5.3.2 Effect of low-fidelity optimization formulation
In addition to the comparison with the conventional data-driven MFTD, the proposed data-driven MFTD was further investigated by comparing the final optimized results from two different couplings of LF optimization and HF evaluation: LF stiffness maximization with HF stress evaluation, and LF maximum stress minimization with HF stress evaluation. In the first coupling, maximum stress minimization was approached through LF stiffness maximization coupled with HF stress evaluation. This strategy is anticipated to indirectly yield promising solutions characterized by low maximum stress and volume by identifying samples with reduced strain energy, indicative of higher stiffness. In a linearly elastic system, a structure with greater stiffness is expected to undergo less deformation under identical loading conditions, potentially leading to decreased stress levels within the structure, given constant applied forces. While LF stiffness maximization is an indirect method, it offers the benefit of generally lacking non-linearity and allows material distributions to be derived without relaxation or approximation methods. In the second coupling, maximum stress minimization was tackled using LF approximated maximum stress minimization and HF stress evaluation, a more direct approach. This method is expected to achieve promising solutions less indirectly through an objective functional of stress approximated by a p-norm, which, while still approximate, is more straightforward than stiffness maximization, where the objective functional is strain energy. A more direct LF optimization could potentially enable the samples that inherit the LF results to attain higher performance after the iterative process.
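As a sketch of the p-norm aggregation mentioned above, the following function illustrates a smooth surrogate of the maximum von Mises stress; the exponent value is an assumption for the example, not the study's setting.

```python
# Minimal sketch: p-norm approximation of the maximum von Mises stress.
# The exponent p is illustrative; larger p tracks the true maximum more
# closely but makes the optimization landscape more non-linear.
import numpy as np

def p_norm_stress(sigma_vm: np.ndarray, p: float = 8.0) -> float:
    """Differentiable surrogate for max(sigma_vm) over all elements."""
    return float(np.sum(sigma_vm ** p) ** (1.0 / p))
```

This trade-off in the exponent is precisely why the LF formulation remains approximate while the HF evaluation works with the unaggregated stress field.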
The material distributions resulting from the LF stiffness maximization and the LF maximum stress minimization are depicted in Fig. 15, organized according to the maximum volume constraint ranging from to . The topologies of the outcomes with LF stiffness maximization closely resemble those derived from the stiffness maximization example, as illustrated in Fig. 8. This resemblance is consistent with the slight distinction between the two LF stiffness maximization problems, where the loading area is marginally broader for the maximum stress minimization problem.
Fig. 15 b) reveals that a wide variety of topologies emerges in the scenario with LF maximum stress minimization, with even results under similar volume constraints showing significant differences. This diversity stems from the strong non-linearity inherent in the approximated maximum stress minimization problem, which tends to trap the optimization process in local optima. Furthermore, these local optima often feature small holes and pronounced steep curves that promote stress concentration. Such characteristics, arising from the approximated maximum stress minimization, leave substantial room to reduce the true objective functional, namely the original maximum von Mises stress.

The evolution of the hypervolume indicator for the scenarios using LF stiffness maximization and LF maximum stress minimization is depicted in Fig. 16, after normalization against the initial hypervolume indicator from the LF maximum stress minimization scenario. The hypervolume indicators were computed using , with and representing maximum stress and volume, respectively. The optimization concluded at iteration for LF stiffness maximization and at iteration for LF maximum stress minimization, once the relative error of the hypervolume indicator fell below the threshold . The improvement in the hypervolume indicator was for LF stiffness maximization and for LF maximum stress minimization, relative to the initial value from LF maximum stress minimization. It is noteworthy that the initial hypervolume for the LF stiffness maximization scenario was higher than that for the LF maximum stress minimization scenario. This discrepancy arises because LF maximum stress minimization yielded topologies that were qualitatively sound but included numerous stress-concentrating features due to its pronounced non-linearity. Once the iterative evolutionary process commenced, however, the hypervolume indicator for the LF maximum stress minimization scenario immediately exceeded that of the LF stiffness maximization scenario, as the qualitatively sound topologies were refined, pushing the Pareto front towards the utopia point in the objective space. Furthermore, the MC-VAE facilitated this rapid advancement by generating samples characterized by fewer holes and smoother material boundaries, effectively blending the features of all samples in the dataset.

6 Conclusions
We proposed a data-driven MFTD framework that expands the search space for HF parameters, previously constrained to a single constant value across the whole optimization, aiming to reduce user dependency on finding optimal solutions. This framework leverages a multi-channel image representation of material distributions and HF parameters, enabling the exploration of both material distributions and HF parameters in a single optimization process. The effectiveness of this framework is demonstrated through two numerical examples: stiffness maximization and maximum stress minimization problems.
The first example for stiffness maximization revealed that the proposed framework can successfully identify promising solutions with structural performance that is superior to that of the reference solutions by efficiently exploring both material distributions and HF parameters. Additionally, within the data-driven MFTD’s crossover process for multi-channel images, the implementation of an oversampling technique significantly improved the overall search performance by enriching the dataset that includes samples with underrepresented HF parameters.
In the second example, the proposed framework demonstrated its ability to identify promising solutions for maximum stress minimization problems, even in the presence of significant non-linearity. It showed a distinct advantage in globally searching for optima across material distribution and HF parameter spaces compared to the conventional data-driven MFTD method employing an SC-VAE. The proposed framework significantly reduces computational demands and user reliance in finding optimal solutions by enabling a more comprehensive optimization in a single run, unlike conventional methods that require extensive parametric studies with varying HF parameters.
Additionally, solutions for the maximum stress minimization were derived through two distinct combinations of LF optimization and HF evaluation, where one is more direct and the other is more indirect yet widely utilized in mechanical engineering practice. The former led to more promising solutions by refining the initial dataset derived from the more direct LF topology optimization approach, which suggests that the synergy between LF topology optimization and HF evaluation critically influences the search efficiency for optimal solutions.
In this study, the HF parameter was set to a constant value across the design domain for simplicity; in principle, it can be expanded to a distribution over the design domain, given the MC-VAE's capability to process input data stored as pixel values in multi-channel images. We plan to address the concurrent optimization of thickness distributions and material distributions in future work, exploiting this further potential to optimize a distribution of HF parameters in addition to conventional material distributions.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements
This work was supported by JSPS KAKENHI, Grant Numbers 21J22284 and 23H03799.
Appendix A Design Domain Mapping
Let $\mathbf{x}_i \in \mathbb{R}^3$ represent the 3D position of the $i$-th node within a mesh patch $\mathcal{M}$, and $\boldsymbol{\xi}_i \in \mathbb{R}^2$ denote the 2D position (parameter value) of the corresponding node within the 2D unit plane mesh $\mathcal{P}$, as illustrated in Fig. 17. The boundaries of the original mesh $\mathcal{M}$ are divided into $\Gamma_1, \dots, \Gamma_4$, corresponding to the boundaries of the unit plane mesh denoted as $\gamma_1, \dots, \gamma_4$, respectively.

The boundary of the unit plane mesh $\mathcal{P}$ is divided as follows:
(21) $\gamma_1 = \{(s, 0) \mid 0 \le s \le 1\}$, $\gamma_2 = \{(1, s) \mid 0 \le s \le 1\}$, $\gamma_3 = \{(s, 1) \mid 0 \le s \le 1\}$, $\gamma_4 = \{(0, s) \mid 0 \le s \le 1\}$.
Then the mapping $\boldsymbol{\xi}$ is computed by solving the following Laplace equation in the original mesh:
(22) $\Delta \boldsymbol{\xi}(\mathbf{x}) = \mathbf{0} \quad \text{for } \mathbf{x} \in \mathcal{M},$
under the following Dirichlet boundary conditions:
(23) $\boldsymbol{\xi}(\mathbf{x}) = \boldsymbol{\gamma}_k\bigl(t(\mathbf{x})\bigr) \quad \text{for } \mathbf{x} \in \Gamma_k, \quad k = 1, \dots, 4.$
Herein, $t(\mathbf{x})$ denotes the arc length ratio of $\Gamma_k(\mathbf{x})$ to $\Gamma_k$, where $\Gamma_k(\mathbf{x})$ denotes the boundary segment from the connection point of $\Gamma_{k-1}$ and $\Gamma_k$ to the point $\mathbf{x}$ on $\Gamma_k$, and $\boldsymbol{\gamma}_k(t)$ is the corresponding arc-length parameterization of $\gamma_k$. Solving the Laplace equation implies that the mapping minimizes the Dirichlet energy of a smooth map from a differentiable surface patch $\mathcal{M}$ to its image $\mathcal{P}$, defined as follows:
(24) $E_D(\boldsymbol{\xi}) = \dfrac{1}{2} \int_{\mathcal{M}} \lVert \nabla \boldsymbol{\xi} \rVert^2 \, \mathrm{d}A,$
where $\nabla \boldsymbol{\xi}$ denotes the gradient of the unit mesh coordinates $\boldsymbol{\xi}$ over the surface patch. This projection distorts the mesh so as to accurately represent the surface's three-dimensional contours on a two-dimensional plane, ensuring all mesh elements fit within a unit square boundary. In other words, DDM is formulated as a harmonic mapping [44], which minimizes distortion by mapping each node on the surface so that the Dirichlet energy is minimized under fixed boundary conditions. Harmonic mapping can smoothly map the interior of the surface while maintaining the specified boundary conditions.
Harmonic mapping is equivalent to conformal mapping [45] when applied under Dirichlet boundary conditions that fix all boundaries of the design domain. The primary goal of conformal mapping is to maintain local angles throughout the mapping process by minimizing the conformal energy, which comprises the Dirichlet energy defined in Eq. (24) and the area of the design domain. Conformal mapping has been widely used in topology optimization to pose topology optimization problems on complex surfaces [46, 47, 48]. Given that the design domain's area remains constant under fixed boundary conditions, minimizing the conformal energy effectively reduces the Dirichlet energy, thereby decreasing distortion in the mapping. This rationale underpins the selection of the conformal mapping algorithm, favored for its straightforward implementation in Python, the primary development environment for this study. The implementation of the conformal mapping function leverages the IGL Python library, building on several works [49, 50, 51].
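As an illustration of this step, the sketch below computes a least-squares conformal parameterization with the IGL Python bindings; the mesh file and pinned boundary vertices are placeholders, and the return convention of `igl.lscm` may differ slightly between binding versions.

```python
# Minimal sketch: conformal (LSCM) parameterization of a surface patch
# with the IGL Python bindings. The mesh path and pinned vertices are
# illustrative assumptions, not the study's actual setup.
import igl
import numpy as np

v, f = igl.read_triangle_mesh("surface_patch.obj")  # hypothetical mesh file

# Pin two boundary vertices to remove the similarity-transform ambiguity
# of least-squares conformal maps.
bnd = igl.boundary_loop(f)                 # indices of the boundary loop
b = np.array([bnd[0], bnd[len(bnd) // 2]], dtype=np.int64)
bc = np.array([[0.0, 0.0], [1.0, 0.0]])    # target UV positions of the pins

# LSCM minimizes the conformal energy (Dirichlet energy plus an area
# term), consistent with the discussion above.
_, uv = igl.lscm(v, f, b, bc)              # uv: (n, 2) parameter values
```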
References
- [1] M. P. Bendsøe, N. Kikuchi, Generating optimal topologies in structural design using a homogenization method, Computer Methods in Applied Mechanics and Engineering 71 (1988) 197–224.
- [2] O. Sigmund, K. Maute, Topology optimization approaches, Structural and Multidisciplinary Optimization 48 (2013) 1031–1055.
- [3] J. D. Deaton, R. D. Grandhi, A survey of structural and multidisciplinary continuum topology optimization: post 2000, Structural and Multidisciplinary Optimization 49 (2014) 1–38.
- [4] J. Zhu, W. Zhang, L. Xia, Topology optimization in aircraft and aerospace structures design, Archives of Computational Methods in Engineering 23 (2016) 595–622.
- [5] C. Le, J. Norato, T. Bruns, C. Ha, D. Tortorelli, Stress-based topology optimization for continua, Structural and Multidisciplinary Optimization 41 (2010) 605–620.
- [6] E. Holmberg, B. Torstenfelt, A. Klarbring, Stress constrained topology optimization, Structural and Multidisciplinary Optimization 48 (2013) 33–47.
- [7] P. Duysinx, M. P. Bendsøe, Topology optimization of continuum structures with local stress constraints, International Journal for Numerical Methods in Engineering 43 (8) (1998) 1453–1478.
- [8] C. A. C. Coello, G. B. Lamont, D. A. Van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems, Springer, 2007.
- [9] C.-Y. Wu, K.-Y. Tseng, Topology optimization of structures using modified binary differential evolution, Structural and Multidisciplinary Optimization 42 (2010).
- [10] J. F. A. Madeira, H. L. Pina, H. C. Rodrigues, GA topology optimization using random keys for tree encoding of structures, Structural and Multidisciplinary Optimization 40 (2010).
- [11] D. J. Munk, G. A. Vio, G. P. Steven, Topology and shape optimization methods using evolutionary algorithms: a review, Structural and Multidisciplinary Optimization 52 (2015) 613–631.
- [12] O. Sigmund, On the usefulness of non-gradient approaches in topology optimization, Structural and Multidisciplinary Optimization 43 (2011) 589–596.
- [13] D. Foster, Generative Deep Learning: Teaching Machines to Paint, Write, and Play, O’Reilly Media, 2019.
- [14] L. Regenwetter, A. H. Nobari, F. Ahmed, Deep generative models in engineering design: A review, Journal of Mechanical Design 144 (7) (2022) 071704.
- [15] D. Lee, W. W. Chen, L. Wang, Y.-C. Chan, W. Chen, Data-driven design for metamaterials and multiscale systems: A review, Advanced Materials 36 (8) (2024) 2305254.
- [16] D. P. Kingma, M. Welling, Auto-encoding variational bayes, arXiv preprint arXiv:1312.6114 (2013).
- [17] T. Guo, D. J. Lohan, R. Cang, M. Y. Ren, J. T. Allison, An Indirect Design Representation for Topology Optimization Using Variational Autoencoder and Style Transfer.
- [18] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial networks, Communications of the ACM 63 (11) (2020).
- [19] S. Oh, Y. Jung, S. Kim, I. Lee, N. Kang, Deep generative design: Integration of topology optimization and generative models, Journal of Mechanical Design 141 (11) (2019) 111405.
- [20] L. Wang, S. Tao, P. Zhu, W. Chen, Data-driven topology optimization with multiclass microstructures using latent variable gaussian process, Journal of Mechanical Design 143 (3) (2020) 031708.
- [21] S. Yamasaki, K. Yaji, K. Fujita, Data-driven topology design using a deep generative model, Structural and Multidisciplinary Optimization 64 (3) (2021) 1401–1420.
- [22] K. Yaji, S. Yamasaki, K. Fujita, Data-driven multifidelity topology design using a deep generative model: Application to forced convection heat transfer problems, Computer Methods in Applied Mechanics and Engineering 388 (2022) 114284.
- [23] A. I. J. Forrester, A. Sobester, A. J. Keane, Multi-fidelity optimization via surrogate modelling, in: Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, Vol. 463, The Royal Society, 2007, pp. 3251–3269.
- [24] K. Yaji, S. Yamasaki, S. Tsushima, K. Fujita, A framework of multi-fidelity topology design and its application to optimum design of flow fields in battery systems, in: International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Vol. 59186, American Society of Mechanical Engineers, 2019, p. V02AT03A059.
- [25] K. Yaji, S. Yamasaki, K. Fujita, Multifidelity design guided by topology optimization, Structural and Multidisciplinary Optimization 61 (3) (2020) 1071–1085.
- [26] H. Kobayashi, K. Yaji, S. Yamasaki, K. Fujita, Freeform winglet design of fin-and-tube heat exchangers guided by topology optimization, Applied Thermal Engineering 161 (2019) 114020.
- [27] M. Kato, T. Kii, K. Yaji, K. Fujita, Tackling an exact maximum stress minimization problem with gradient-free topology optimization incorporating a deep generative model, in: International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, American Society of Mechanical Engineers, Boston, Massachusetts, USA, 2023.
- [28] T. Kii, K. Yaji, K. Fujita, Z. Sha, C. Conner Seepersad, Latent crossover for data-driven multifidelity topology design, Journal of Mechanical Design 146 (5) (2024) 051713.
- [29] S. Tsutsui, M. Yamamura, T. Higuchi, Multi-parent recombination with simplex crossover in real coded genetic algorithms, in: Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation - Volume 1, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1999, pp. 657–664.
- [30] J. Gu, Z. Wang, J. Kuen, L. Ma, A. Shahroudy, B. Shuai, T. Liu, X. Wang, G. Wang, J. Cai, T. Chen, Recent advances in convolutional neural networks, Pattern Recognition 77 (2018) 354–377.
- [31] L. Antelmi, N. Ayache, P. Robert, M. Lorenzi, Sparse multi-channel variational autoencoder for the joint analysis of heterogeneous data, in: Proceedings of the 36th International Conference on Machine Learning, Vol. 97, 2019, pp. 302–311.
- [32] K. Shang, H. Ishibuchi, L. He, L. M. Pang, A survey on the hypervolume indicator in evolutionary multiobjective optimization, IEEE Transactions on Evolutionary Computation 25 (1) (2020) 1–20.
- [33] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on Evolutionary Computation 6 (2) (2002) 182–197.
- [34] S. Yamasaki, K. Yaji, K. Fujita, Knowledge discovery in databases for determining formulation in topology optimization, Structural and Multidisciplinary Optimization 59 (2018) 595–611.
- [35] B. S. Lazarov, O. Sigmund, Filters in topology optimization based on helmholtz-type differential equations, International Journal for Numerical Methods in Engineering 86 (2011) 765–781.
- [36] B. Bourdin, Filters in topology optimization, International Journal for Numerical Methods in Engineering 50 (9) (2001) 2143–2158.
- [37] F. Wang, B. S. Lazarov, O. Sigmund, On projection methods, convergence and robust formulations in topology optimization, Structural and Multidisciplinary Optimization 43 (2011) 767–784.
- [38] K. Svanberg, The method of moving asymptotes - a new method for structural optimization, International Journal for Numerical Methods in Engineering 24 (2) (1987) 359–373.
- [39] R. J. Yang, C. J. Chen, Stress-based topology optimization, Structural optimization 12 (1996) 98–105.
- [40] P. Duysinx, O. Sigmund, New developments in handling stress constraints in optimal material distribution, in: Proceedings of the 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, American Institute of Aeronautics and Astronautics, 1998.
- [41] L. Li, K. Khandelwal, Volume preserving projection filters and continuation methods in topology optimization, Engineering Structures 85 (2015) 144–161.
- [42] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014).
- [43] C. X. Ling, C. Li, Data mining for direct marketing: Problems and solutions, in: Proceedings of the Fourth International Conference on Knowledge Discovery and Data Mining, 1998, pp. 73–79.
- [44] J. Eells, J. H. Sampson, Harmonic mappings of riemannian manifolds, American Journal of Mathematics 86 (1) (1964) 109–160.
- [45] X. D. Gu, W. Zeng, F. Luo, S. T. Yau, Numerical computation of surface conformal mappings, Computational Methods and Function Theory 11 (2012) 747–787.
- [46] P. Vogiatzis, M. Ma, S. Chen, X. D. Gu, Computational design and additive manufacturing of periodic conformal metasurfaces by synthesizing topology optimization with conformal mapping, Computer Methods in Applied Mechanics and Engineering 328 (2018) 477–497.
- [47] W. Huo, C. Liu, Z. Du, X. Jiang, Z. Liu, X. Guo, Topology optimization on complex surfaces based on the moving morphable component method and computational conformal mapping, Journal of Applied Mechanics 89 (5) (2022) 051008.
- [48] Q. Ye, Y. Guo, S. Chen, N. Lei, X. D. Gu, Topology optimization of conformal structures on manifolds using extended level set methods (x-lsm) and conformal geometry theory, Computer Methods in Applied Mechanics and Engineering 344 (2019) 164–185.
- [49] M. Desbrun, M. Meyer, P. Alliez, Intrinsic parameterizations of surface meshes, Computer Graphics Forum 21 (3) (2002) 209–218.
- [50] B. Lévy, S. Petitjean, N. Ray, J. Maillot, Least squares conformal maps for automatic texture atlas generation, ACM Transactions on Graphics 21 (3) (2002) 362–371.
- [51] P. Mullen, Y. Tong, P. Alliez, M. Desbrun, Spectral conformal parameterization, Computer Graphics Forum 27 (5) (2008) 1487–1494.