Physics Successfully Implements Lagrange Multiplier Optimization
Abstract
Optimization is a major part of human effort. While optimization is mathematical, it is also built into physics. For example, physics has the principle of Least Action, the principle of Minimum Entropy Generation, and the Variational Principle. Physics also has physical annealing which, of course, preceded computational Simulated Annealing. Physics has the Adiabatic Principle which, in its quantum form, is called Quantum Annealing. Thus, physical machines can solve the mathematical problem of optimization, including constraints.
Binary constraints can be built into the physical optimization. In that case, the machines are digital in the same sense that a flip-flop is digital. A wide variety of machines have had recent success at optimizing the Ising magnetic energy. We demonstrate in this paper that almost all those machines perform optimization according to the Principle of Minimum Entropy Generation as put forth by Onsager. Further, we show that this optimization is in fact equivalent to Lagrange multiplier optimization for constrained problems. We find that the physical gain coefficients which drive those systems actually play the role of the corresponding Lagrange Multipliers.
1 Introduction
Optimization is ubiquitous in today’s world. Everyday applications of optimization range from aerodynamic design of vehicles by fluid mechanics and physical stress optimization of bridges in civil engineering to scheduling of airline crews and routing of delivery trucks in operations research. Furthermore, optimization is also indispensable in machine learning, reinforcement learning, computer vision, and speech processing. Given the preponderance of massive datasets and computations today, there has been a surge of activity in the design of hardware accelerators for neural network training and inference [1].
We ask whether physics can address optimization. There are a number of promising physical principles that drive dynamical systems toward an extremum. These are: the principle of Least Action, the principle of Minimum Entropy Generation, and the Variational Principle. Physics also has actual physical annealing, which preceded computational Simulated Annealing. Further, physics has the Adiabatic Principle which, in its quantum form, is called Quantum Annealing.
In due course, we may learn how to use each of these principles to perform optimization. Let us consider the principle of minimum entropy generation in dissipative physical systems, such as resistive electrical circuits. It was shown by Onsager [2] that the equations of linear systems, like resistor networks, can be re-expressed as the minimization of a function of the currents in the various branches of the resistor network. For a resistor network, that function is the power dissipation, or entropy generation rate. If a merit function is re-expressed as a power dissipation, the circuit itself will find the configuration that minimizes the merit function, that is, the configuration of minimum power dissipation. Optimization is generally accompanied by constraints. For example, perhaps the constraint is that the final answers must be restricted to be $\pm 1$. Such a digitally constrained optimization produces answers compatible with any digital computer.
There has been a series of machines created in the physics and engineering community to devise physics-based engines for the Ising problem. The Ising challenge is to find the minimum energy configuration of a large set of magnets. It is a very hard problem even when the magnets are restricted to only two orientations, North pole up or down [3]. Our main insight in this paper is that most of these Ising solvers use hardware based on the Principle of Minimum Entropy Generation. The natural evolution of these machines is toward a good, low-power-dissipation final state. Further, almost all of them implement the well-known Lagrange Multipliers method for constrained optimization.
An early work was by Yamamoto et al. in [4] and this was followed by further work from their group [5], [6], [7], [8], [9], [10], [11] and other groups [12], [13], [14], [15], [16], [17], [18], [19], [20]. These entropy generating machines range from coupled optical parametric oscillators, to RLC electrical circuits, to coupled exciton-polaritons, and silicon photonic coupled arrays. These types of machines have the advantage that they solve digital problems in the analog domain, which can be orders-of-magnitude faster and more energy-efficient than conventional digital chips that are limited by latency and the energy cost [11].
Within the framework of these dissipative machines, constraints can be readily included. In effect, these machines perform constrained optimization equivalent to the technique of Lagrange multipliers. We illustrate this connection by surveying 7 published physically distinct machines and showing that each minimizes entropy generation in its own way, subject to constraints; in fact, they perform Lagrange multiplier optimization. We note here that the systems in [11], [17], and [18] follow their own dynamics and are not related to the method of Lagrange multipliers. The system in [11] will be discussed in Section 5 in the main text while the work in [17] will be discussed in Appendix D.
In effect, physical machines perform local steepest descent in the entropy generation rate. They can become stuck in local optima. At the very least, they perform a rapid search for local optima, thus reducing the search space for the global optimum. These machines are also adaptable toward searching in an expanded phase space, and other techniques for approaching a global optimum.
The paper is organized as follows. In Section 2, we recognize that physics performs optimization through its variational and optimization principles. Then, we concentrate on the principle of minimum entropy generation or minimum power dissipation. In Section 3, we give an overview of the minimum entropy generation optimization solvers in the literature and show how they incorporate constraints. Section 4 has a quick tutorial on the method of Lagrange Multipliers. Section 5 studies five published solvers in detail and shows that they all follow some form of Lagrange multiplier dynamics. In Section 6, we look at those published physics-based solvers which are less obviously connected to Lagrange multipliers. Section 7 presents the applications of physics-based solvers to perform linear regression in statistics. Finally, in Section 8, we conclude and discuss the consequences of this ability to implement physics-based Lagrange multiplier optimization for areas such as machine learning.
2 Optimization in Physics
We survey the minimization principles of physics and the important optimization algorithms derived from them. These physical optimization machines are intended to converge to optima that are ‘agnostic to initial conditions’. By ‘agnostic to initial conditions’, we mean systems that converge to the global optimum, or a good local optimum, irrespective of the initial point for the search.
2.1 Physics Principles and Algorithms
2.1.1 The principle of Least Action
The principle of Least Action is the most fundamental principle in physics. Newton’s Laws of Mechanics, Maxwell’s Equations of Electromagnetism, Schrödinger’s equation in Quantum Mechanics, and Quantum Field Theory can all be interpreted as minimizing a quantity called action. For the special case of light propagation, this reduces to the principle of Least Time, as shown in Fig. 1.
A conservative system without friction or losses evolves according to the principle of Least Action. The fundamental equations of physics are reversible. A consequence of this reversibility is the Liouville Theorem which states that volumes in phase space are left unchanged as the system evolves.
Conversely, in both a computer and an optimization solver, the goal is to arrive at a specific solution with less uncertainty, that is, a smaller zone in phase space than the initial state; this carries an entropy cost first specified by Landauer and Bennett. Thus, some degree of irreversibility, or energy cost, is needed, specified by the number of digits in the answer in the Landauer/Bennett analysis. An algorithm has to be designed and programmed into the reversible system (either via software or via hardcoding of hardware) to effect the reduction in entropy needed to solve the optimization problem. Coming up with a fast algorithm for NP-hard problems is still an open problem in the field of reversible computing, which includes quantum computing.
This would require an energy cost, but not necessarily a requirement for continuous power dissipation. We look forward to computer science breakthroughs that would allow the principle of Least Action to address unsolved problems. An alternative approach to computing would involve physical systems that continuously dissipate power, aiding in the contraction of phase space toward a final solution. While exceeding the Landauer limit, such systems might have the advantage of speed and simplicity. This brings us to the principle of Least Power dissipation.

2.1.2 The principle of Least Power Dissipation
If we consider systems that continuously dissipate power, we are led to a second optimization principle in physics, the principle of Least Entropy Production or Least Power Dissipation. This principle states that any physical system will evolve into a steady-state configuration that minimizes the rate of entropy production given the constraints (such as fixed thermodynamic forces, voltage sources, or input power) that are imposed on the system. An early version of this statement is provided by Onsager in his celebrated papers on the reciprocal relations [2]. This was followed by further foundational work on this principle by Prigogine, [21], and de Groot, [22]. This principle is readily seen in action in electrical circuits and is illustrated in Fig. 2. We shall frequently use this principle, as formulated by Onsager, in the rest of the paper.

2.1.3 Physical Annealing; Energy Minimization
This technique is widely used in materials science and metallurgy and involves the slow cooling of a system starting from a high temperature. As the cooling proceeds, the system tries to maintain thermodynamic equilibrium by reorganizing itself into the lowest energy minimum in its phase space. Energy fluctuations due to finite temperature help the system escape from local optima. In theory, this procedure reaches the global optimum as the temperature approaches zero, but the temperature has to be lowered prohibitively slowly for that to happen.

2.1.4 Adiabatic Method
The Adiabatic Method involves the slow transformation of a system from initial conditions that are easily constructed to final conditions that capture the difficult problem at hand.
More specifically, to solve the Ising problem using this algorithm, one initializes the system of spins in the ground state of a simple Hamiltonian and then slowly varies the parameters of the Hamiltonian to end up with the final Ising Hamiltonian of interest. If the parameters are varied slowly enough, the system ends up in the ground state of the final Hamiltonian and the problem gets solved. In a quantum mechanical system, this is sometimes called ‘quantum annealing’. Several proposals and demonstrations, including the well-known D-Wave machine [23], utilize this algorithm.
The slow rate of variation of the Hamiltonian parameters that is required for this method to work is determined by the minimum energy spacing between the instantaneous ground state and the instantaneous first excited state that occurs as we move from the initial Hamiltonian to the final Hamiltonian. The smaller the gap is for a particular variation schedule, the slower the rate at which we need to perform the variation to successfully solve the problem. It has been shown that the gap can get exponentially small in the worst case, implying that this algorithm can take exponential time in the worst case for NP-hard problems.

2.1.5 Minimum Entropy Generation in Multi-Oscillator Arrays
Multi-Oscillator Arrays subject to parametric gain were introduced in [4] and [7] for solving Ising problems. This can be regarded as a subset of the Principle of Minimum Entropy Generation, which is always subject to a non-zero input power constraint. In this case, gain acts as a boundary condition or constraint for the principle of minimum entropy generation, and the oscillator array must arrange itself to dissipate the least power subject to that constraint. In the context of a multi-coupled-oscillator array with gain, a certain oscillator mode will have the least loss. That mode will grow in amplitude most rapidly. This least-loss mode can be individually selected by increasing the gain until it matches the intrinsic loss of that mode. Then further nonlinear evolution amongst all the modes will occur. If the oscillator array is bistable, as is the case for parametric gain which drives oscillation along the real axis, this becomes the analog of magnetic bistability in an Ising problem. Then, we seek a solution for the lowest magnetic energy state with some oscillators locked at zero phase shift and others locked at $\pi$-phase shift. This mechanism will be the main point of Section 3.
3 Coupled Multi-Oscillator Array Ising Solvers
The motivation for ‘Coupled Multi-Oscillator Array Ising Solvers’ is best explained using concepts from laser physics. As a laser is slowly being turned on, spontaneous emission from the laser gain medium couples into the various cavity modes and begins to become amplified. The different optical modes in the cavity have different loss coefficients due to their differing spatial profiles. As the laser pump/gain increases, the cavity mode with the least loss grows faster than the other modes. Once the gain reaches the threshold gain, then further nonlinear evolution amongst all the modes will occur.
The design of the Coupled Multi-Oscillator Array Ising machines tries to map the power losses of the optimization machine to the magnetic energies of various states of the Ising problem. If the mapping is correct, the lowest power configuration will match the energetic ground state of the Ising problem. This is illustrated in Fig. 5. In effect, this is an example of a system evolving toward a state of minimum entropy generation, or minimum power dissipation, subject to the constraint of gain being present.

The archetypal solver in this class consists of a network of interconnected oscillators, driven by phase-dependent parametric gain. Parametric gain amplifies only the cosine quadrature and causes the electric field to lie along the Real axis in the complex plane. The phase of the electric field (0 or $\pi$) can be used to represent spin in the Ising problem. The resistive interconnections between the oscillators are designed to favor ferromagnetic or anti-ferromagnetic ‘spin-spin’ interactions by the Principle of Minimum Entropy Generation, subject to the constraint of parametric (phase-dependent) gain as the power input. The parametric gain favors oscillation along the real axis of the complex plane, where the positive real axis would correspond to spin up, and the negative real axis would correspond to spin down.
The Voltage (or Current) input constraint is very important to the Principle of Minimum Entropy Generation. If there were no power input constraint, all the currents and voltages would be zero, and the minimum power dissipated would be zero. In the case of the Coupled Multi-Oscillator circuit, the power input is produced through a gain mechanism, or a gain module. The constraint could be the voltage input to the gain module. But if the gain were to be too small, it might not exceed the corresponding circuit losses, and the current and voltage would remain near zero. Thus, there is usually a threshold gain requirement, when applying the Principle of Minimum Entropy Generation to the Coupled Multi-Oscillator circuit.
If the gain is insufficient, the circuit will achieve minimum entropy generation at negligible currents and voltages. The intrinsic losses in the network would dominate and the circuit currents and voltages would remain near zero. If the pump gain is then gradually ramped up, the oscillatory mode requiring the least threshold gain begins oscillating. Upon reaching the threshold gain, a non-trivial current distribution built on the bistability of the Coupled Multi-Oscillator circuit will emerge. As the gain exceeds the required threshold, there will be further nonlinear evolution among the modes so as to minimize power dissipation. The final state “spin” configuration, dissipating the lowest power (or entropy generation), emerges as the sought-for optimum.
Ideally, the gain evolution will be controlled by the Lagrange function to find the local minimum power dissipation configuration as will be discussed in Section 5.5. With Minimum Entropy Generation, as with most optimization schemes, it is difficult to guarantee a global optimum.
In optimization, each constraint contributes a Lagrange multiplier. We will show that the gains of the oscillators are the Lagrange multipliers of the constrained system. In the next section, we provide a brief tutorial on Lagrange Multiplier optimization.
4 Lagrange Multiplier Optimization Tutorial
The method of Lagrange multipliers is a very well-known procedure for solving constrained optimization problems. In constrained optimization, the optimal point $\mathbf{x}^*$ in multi-dimensional solution space locally optimizes the merit function $f(\mathbf{x})$ subject to the constraint $g(\mathbf{x}) = 0$. The optimal point has the property that the slope of the merit function is zero as infinitesimal steps are taken away from $\mathbf{x}^*$, as taught in calculus. But these deviations are restricted to the constraint curve $g(\mathbf{x}) = 0$, as shown in Fig. 6. The iso-contours of the function $f$ increase until they are limited by, and just touch, the constraint curve at the point $\mathbf{x}^*$.

At the point of touching, $\mathbf{x}^*$, the gradient of $f$ and the gradient of $g$ are parallel to each other. This can be stated formally as:

$$\nabla f(\mathbf{x}^*) = \lambda\,\nabla g(\mathbf{x}^*)$$

The proportionality constant $\lambda$ is called the Lagrange multiplier corresponding to the constraint $g(\mathbf{x}) = 0$.
Although Fig. 6 shows only a 2-dimensional optimization space, in general let us take the optimization space to be $n$-dimensional. When we have multiple constraints $g_j(\mathbf{x}) = 0$, $j = 1, \ldots, m$, we correspondingly expand the $n$-dimensional Lagrange Multiplier requirement:

$$\nabla f(\mathbf{x}) = \sum_{j=1}^{m} \lambda_j\,\nabla g_j(\mathbf{x})$$

where the gradient vector equation represents $n$ equations, accompanied by the $m$ constraint equations $g_j(\mathbf{x}) = 0$, resulting in $n + m$ equations. These equations solve for the $n$ components of the vector $\mathbf{x}$, and the $m$ unknown Lagrange Multipliers $\lambda_j$. That is, $n + m$ equations for $n + m$ unknowns.
Motivated by the above condition, let us introduce a Lagrange function defined as follows:

$$L(\mathbf{x}, \boldsymbol{\lambda}) = f(\mathbf{x}) + \sum_{j=1}^{m} \lambda_j\,g_j(\mathbf{x})$$
which can be optimized by gradient descent or other methods to solve for $\mathbf{x}$ and $\boldsymbol{\lambda}$. The full theory of Lagrange multipliers, and the popular ‘Augmented Lagrangian Method of Multipliers’ algorithm used to solve for locally optimal $\mathbf{x}$, are discussed in great detail in [24] and [25]. A gist of the main points is presented in Appendices A, B, C.
For the specific case of the Ising problem, the objective function is given by $f(\boldsymbol{\mu}) = -\sum_{i \neq j} J_{ij}\,\mu_i\mu_j$, where $f$ is the magnetic Ising energy and $\mu_i$ is the $i$-th magnetic moment. For the optimization method represented in this paper, we need a circuit or other physical system whose power dissipation is also $-\sum_{i \neq j} J_{ij}\,x_i x_j$, but now the function is power dissipation, not energy, $x_i$ is a variable that represents voltage, or current, or electric field, and the $J_{ij}$ are not magnetic energies, but rather resistive coupling elements. The correspondence is between magnetic spins quantized along the z-axis, $\mu_i = \pm 1$, and the circuit variable $x_i$.
While ‘energy’ and ‘power dissipation’ are measured in different units, we nonetheless need to establish a correspondence between them. For every optimization problem, there is a challenge of finding a physical system whose power dissipation function represents the desired equivalent optimization function.
If the Ising problem has $N$ spins, there are also $N$ constraints, one for each of the spins. A sufficient constraint is: $x_i^2 = 1$, which permits only $x_i = \pm 1$. More complicated nonlinear constraints can be envisioned, but $x_i^2$ could represent the first two terms in a more complicated constraint Taylor expansion.
Therefore, a sufficient Lagrange function for the Ising problem, with digital constraints, is given by:

$$L(\mathbf{x}, \boldsymbol{\lambda}) = -\sum_{i \neq j} J_{ij}\,x_i x_j + \sum_{i=1}^{N} \lambda_i\left(x_i^2 - 1\right)$$

where $\lambda_i$ is the Lagrange Multiplier associated with the corresponding constraint $x_i^2 = 1$. We shall see in the next section that most analog algorithms that have been proposed for the Ising problem in the literature actually tend to optimize some version of the above Lagrange function.
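As a concrete illustration of this Lagrange function, the following minimal numerical sketch (ours, not drawn from any of the surveyed papers; the instance size, couplings, and step sizes are arbitrary choices) performs gradient descent in the variables $x_i$ and gradient ascent in the multipliers $\lambda_i$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
J = rng.choice([-1.0, 1.0], size=(N, N))
J = np.triu(J, 1); J = J + J.T            # symmetric Ising couplings, zero diagonal

x = 0.1 * rng.standard_normal(N)          # spin amplitudes
lam = np.zeros(N)                         # multipliers, one per constraint x_i^2 = 1
eta_x, eta_lam = 0.02, 0.02               # step sizes

for _ in range(20000):
    # L(x, lam) = -x^T J x + sum_i lam_i (x_i^2 - 1)
    grad_x = -2.0 * J @ x + 2.0 * lam * x
    x   -= eta_x   * grad_x               # descent in x
    lam += eta_lam * (x**2 - 1.0)         # ascent in lam enforces the constraints

spins = np.sign(x)
print("binary spins:", spins)
print("Ising energy:", -spins @ J @ spins)
```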
5 The Physical Ising Solvers
In this section, we discuss each physical procedure proposed in the literature and show how each physical scheme implements the method of Lagrange multipliers. They all obtain good performance on the Gset benchmark problem set [26], and many of them demonstrate that they perform better than the heuristic algorithm, Breakout Local Search [27]. The main result of our work is the realization that the pump gains used in all the physical methods are in fact Lagrange multipliers.
The available physical solvers in the literature are as follows:
1. Optical Parametric Oscillators
2. Coupled Radio Oscillators on the Real Axis
3. Coupled laser cavities using multicore fibers
4. Coupled Radio Oscillators on the Unit Circle
5. Coupled polariton condensates
Then there are a number of schemes that also rely upon a variant of minimum entropy production or power dissipation:
6. Iterative Analog Matrix Multipliers
7. Leleu Mathematical Ising Solver
In Appendix D, there is a scheme that appears unconnected with the minimum entropy production rate:
8. Adiabatic coupled radio oscillators (Toshiba)
We shall see that methods 1, 2, and 4 in the literature use only one gain for all the oscillators, which is equivalent to imposing only one constraint. Methods 3, 5, and 6 use different gains for each spin and correctly capture the full constraints of the Ising problem.
5.1 Optical Parametric Oscillators
5.1.1 Overview
An early optical machine for solving the Ising problem was presented by Yamamoto et al. [4] and [8]. Their system consists of several pulses of light circulating in an optical fiber loop, with the phase of each light pulse representing an Ising spin. In parametric oscillators, gain occurs at half the pump frequency. If the gain overcomes the intrinsic losses of the fiber, the optical pulse builds up. Parametric amplification provides phase dependent gain. It restricts the oscillatory phase to the Real Axis of the complex plane. This leads to bistability along the positive or negative real axis, allowing the optical pulses to mimic the bistability of magnets.
In the Ising problem, there is magnetic coupling between spins. The corresponding coupling between optical pulses is achieved by controlled optical interactions between the pulses. In Yamamoto’s approach, one pulse is first plucked out by an optical gate, amplitude modulated by the proper connection weight $J_{ij}$ specified in the Ising Hamiltonian, and then reinjected and superposed onto the other optical pulse, producing constructive or destructive interference, representing ferromagnetic or anti-ferromagnetic coupling.
By providing saturation to the pulse amplitudes, the optical pulses will finally settle down, each to one of the two bistable states. We will find that the pulse amplitude configuration evolves exactly according to the Principle of Minimum Entropy Generation. If the magnetic dipole solutions in the Ising problem are constrained to $\mu_i = \pm 1$, then each constraint is associated with a Lagrange Multiplier. We find that each Lagrange Multiplier turns out to be equal to the gain or loss associated with the corresponding oscillator.
5.1.2 Lagrange multipliers as gain coefficients
As an example, Yamamoto et al. [7] analyze their parametric oscillator system using coupled wave equations for the slowly varying amplitudes of the circulating optical modes. We now show that the coupled wave equation approach reduces to an extremum of their system’s ‘Entropy Generation’ or ‘power dissipation’. The coupled-wave equation for parametric gain of the slowly varying amplitude $X_i$ of the in-phase cosine component of the $i$-th optical pulse (representing magnetic spin $\mu_i$ in an Ising system) is as follows:

$$\frac{dX_i}{dt} = \left(\gamma_i - \alpha_i\right)X_i + \sum_{j \neq i} J_{ij}\,X_j \tag{1}$$

where the weights, $J_{ij}$, are the magnetic ferromagnetic or anti-ferromagnetic cross-couplings, $\gamma_i$ represents the phase-dependent parametric gain given by the pump to the $i$-th circulating pulse, whereas $\alpha_i$ is the corresponding dissipative loss. The amplitudes $X_i$ represent the optical cosine wave electric field amplitudes. In the case of circuits, the voltage amplitudes can be expressed as $V = V_c + jV_s$, where $V_c$ and $V_s$ represent the cosine and sine quadrature components of voltage, and j is the unit imaginary. For clarity of discussion, we dropped the cubic terms in (1) that Yamamoto et al. originally had. A discussion of these terms is given in Appendix C.
Owing to the nature of parametric amplification, the quadrature sine wave components of the electric field amplitude die out rapidly. The rate of entropy generation, or net power dissipation $P$, including the negative dissipation associated with gain, can be written:

$$P(\mathbf{X}) = \sum_i \left(\alpha_i - \gamma_i\right)X_i^2 - \sum_{i \neq j} J_{ij}\,X_i X_j \tag{2}$$
If we minimize the entropy generation without invoking any constraints, that is, with $\gamma_i = 0$, the amplitudes simply go to zero, which generates the minimum entropy.
If the gain is large enough, some of the amplitudes might go to infinity. To avoid this, we may employ the constraint functions $g_i(X_i) = X_i^2 - 1 = 0$, which enforce a digital outcome. (Actually, a constraint of the form $X_i^2 - 1$ is quite general in the sense that it can represent the first two terms of the Taylor Series of an arbitrary constraint.) Adding the constraint functions to the entropy generation yields the Lagrange function, including the constraint functions times the respective Lagrange Multipliers:

$$L(\mathbf{X}, \boldsymbol{\gamma}) = \sum_i \alpha_i\,X_i^2 - \sum_{i \neq j} J_{ij}\,X_i X_j - \sum_i \gamma_i\left(X_i^2 - 1\right) \tag{3}$$

Comparing the unconstrained (2) to the constrained (3), they differ only in the final $(-1)$ term, which effectively constrains the amplitudes and prevents them from diverging to $\pm\infty$. Expression (3) is the Lagrange function given at the end of Section 4. Surprisingly, the gains $\gamma_i$ emerge to play the role of Lagrange Multipliers. This means that each mode, represented by the subscript $i$ in $\gamma_i$, must adjust to a particular gain which minimizes the overall entropy generation, and the respective gains represent the Lagrange Multipliers. Minimization of the Lagrange function (3) provides the final steady state of the system dynamics.
If the circuit or optical system is designed to dissipate power, or equivalently generate entropy, in a mathematical form that matches the magnetic energy in the Ising problem, then the dissipative system will seek out a corresponding local optimum configuration of the magnetic Ising energy.
Such a physical system, constrained to $X_i = \pm 1$, is digital in the same sense as a flip-flop circuit, but unlike the von Neumann computer, the inputs are resistor weights for power dissipation. Nonetheless, such a physical system can evolve in a direct manner, without the need for shuttling information back and forth as in a von Neumann computer, providing faster answers. Without the communications overhead, but with the higher operation speed, the energy dissipated to arrive at the final answer will be less, in spite of the circuit being required to generate entropy during its evolution toward the final state.
To achieve minimum entropy production, the amplitudes $X_i$ and the Lagrange Multipliers $\gamma_i$ must all be simultaneously optimized. While a circuit will evolve toward optimal amplitudes $X_i$, the gains must arise from a separate active circuit. Ideally, the active circuit which controls the Lagrange Multiplier gains $\gamma_i$ would have its entropy production included with the main circuit. A more common method is to provide gain that follows a heuristic rule using an external feedback circuit; Yamamoto et al., for example, ramp the gain according to such a feedback heuristic. It is not yet clear whether the heuristic-based approach toward gain evolution will be equally effective as simply lumping together all main circuit and feedback components and simply minimizing the total power dissipation.
We conclude this subsection by noting that the Lagrangian, Eq (3), corresponds to Lagrange multiplier optimization using the following merit function and constraints:

$$f(\mathbf{X}) = -\sum_{i \neq j} J_{ij}\,X_i X_j, \qquad g_i(X_i) = X_i^2 - 1 = 0,$$

with the net gains $\left(\alpha_i - \gamma_i\right)$ playing the role of the Lagrange multipliers.
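A minimal numerical sketch of these dynamics (ours; the gain-ramp rule below is a simple stand-in assumption for the experimental feedback heuristics, and all parameter values are arbitrary) integrates Eq. (1) while ramping each gain $\gamma_i$ until the corresponding amplitude saturates at $X_i^2 = 1$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
J = rng.choice([-1.0, 1.0], size=(N, N))
J = np.triu(J, 1); J = J + J.T           # symmetric Ising couplings, zero diagonal

alpha = 1.0                              # dissipative loss, equal for all pulses
gamma = np.zeros(N)                      # parametric gains = Lagrange multipliers
X = 1e-3 * rng.standard_normal(N)        # cosine-quadrature pulse amplitudes
dt = 1e-3

for _ in range(100000):
    X += dt * ((gamma - alpha) * X + J @ X)   # Eq. (1), cubic terms dropped
    gamma += dt * 0.5 * (1.0 - X**2)          # assumed ramp: adjust gain until X_i^2 -> 1

print("converged gains:", np.round(gamma, 2))  # these act as the Lagrange multipliers
print("spin solution  :", np.sign(X))
```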
5.1.3 Conclusions from Optical Parametric Oscillators
1. Physical systems minimize the entropy production rate, or power dissipation, subject to input constraints of voltage, amplitude, gain, etc.
2. These systems actually perform Lagrange Multiplier optimization.
3. Indeed, it is the gain in each oscillator that plays the role of the corresponding Lagrange Multiplier.
4. If the Lagrange function is split in such a way that only the Ising function, $-\sum_{i \neq j} J_{ij}\,X_i X_j$, is treated as the merit function $f$, then the Lagrange multipliers corresponding to the constraints are the net gains, $\left(\alpha_i - \gamma_i\right)$.
5. Under the digital constraint on the amplitudes, $X_i = \pm 1$, entropy generation minimization schemes are actually binary, similar to a flip-flop.
5.2 Coupled Radio Oscillators on the real axis
5.2.1 Overview
A coupled LC oscillator system with parametric amplification was analyzed in the circuit simulator SPICE by Xiao et al. [12]. This is analogous to the optical Yamamoto system, but this system consists of a network of radio frequency LC oscillators coupled to one another through resistive connections. The LC oscillators contain linear inductors but nonlinear capacitors, which provide the parametric gain. The parallel or cross-connect resistive connections between the oscillators are designed to implement the ferromagnetic or anti-ferromagnetic couplings between magnetic dipole moments, as shown in Fig. 7. The corresponding phase of the voltage amplitude $V_i$, 0 or $\pi$, determines the sign of the magnetic dipole moment $\mu_i = \pm 1$.

The nonlinear capacitors are pumped by a voltage at frequency $2\omega_0$, where the LC oscillator natural frequency is $\omega_0$. Second harmonic pumping leads to parametric amplification in the oscillators. As in the optical case, parametric amplification plays the dual role of generating/sustaining voltage oscillations as well as imposing phase bistability on the ac voltages in the oscillators. The bistability refers to gain along the Real-Axis defined by time synchronization with the $2\omega_0$-pump.
The $2\omega_0$-pump induces gain in the Real-Axis quadrature. As in the case of optical parametric amplifier machines, ideally, an active circuit would control the Lagrange Multiplier gains $\gamma_i$, and the gain control circuit would have its entropy production included with the main circuit. A more common approach is to provide gain that follows a heuristic rule.
As in the optical parametric amplifier case, a mechanism is needed to prevent the parametric gain from producing infinite amplitude signals. Zener diodes can be inserted into the circuit to restrict the amplitudes to finite saturation values or to digitally defined values. With the diodes in place, the circuit settles into a voltage phase configuration, 0 or $\pi$, that minimizes net power dissipation for a given pump gain.
5.2.2 Lagrange function and Lagrange multipliers
The amplitudes can be defined by the voltages across the LC oscillator capacitors and derived from Kirchhoff’s voltage and current equations as in Xiao, [12]. Performing the slowly-varying amplitude approximation on the cosine components of these voltages, $V_{ci}$, Xiao obtains the following equation of motion:

$$\frac{dV_{ci}}{dt} = -\frac{N-1}{4RC}\,V_{ci} + \frac{1}{4RC}\sum_{j \neq i} J_{ij}\,V_{cj} + \frac{\gamma\omega_0}{4}\,V_{ci} \tag{4}$$

where the $V_{ci}$ are the peak voltage amplitudes in units of a reference voltage, $R$ is the resistance of the coupling resistors, the cross-couplings $J_{ij}$ are assigned binary values $\pm 1$, $C$ is the linear part of the capacitance in each oscillator, $N$ is the number of oscillators, $\omega_0$ is the natural frequency of all the oscillators, and the parametric gain constant is $\gamma = \Delta C / C$, where $\Delta C$ is the capacitance modulation at the second harmonic. In the decay constant $(N-1)/(4RC)$, there are $N-1$ resistors in parallel, since it is assumed that each oscillator can give up energy to all the other oscillators, with the coupling resistors acting in parallel. In this simplified model all decay constants are taken as equal, and moreover each oscillator experiences exactly the same parametric gain $\gamma$; conditions that can be relaxed if needed.
The number 4 is present in the first denominator since, for two coupled LC circuits as shown in Fig. 7, the decay is controlled by the RC time of the capacitors in parallel and the resistors in series. Likewise, the number 4 appears for loss and parametric gain, but for different reasons in each case.
We note that equation (4) above performs gradient descent on the net power dissipation function:

$$P(\mathbf{V}_c) = \frac{1}{4R}\sum_{i \neq j}\left(V_{ci}^2 + V_{cj}^2\right) - \frac{1}{2R}\sum_{i \neq j} J_{ij}\,V_{ci}V_{cj} - \frac{\gamma\omega_0 C}{2}\sum_i V_{ci}^2 \tag{5}$$

which is very similar to Section 5.1. On the right-hand side, the reason for the 4 in the first denominator is to compensate for double counting in the sum over $i \neq j$. The first two terms on the right-hand side together represent the dissipative losses in the coupling resistors while the third term is the negative of the gain provided to the system of oscillators.
Next, we obtain the following Lagrange function through the same replacement of $V_{ci}^2$ with $\left(V_{ci}^2 - 1\right)$ in the gain term that we performed in Section 5.1:

$$L(\mathbf{V}_c, \gamma) = \frac{1}{4R}\sum_{i \neq j}\left(V_{ci}^2 + V_{cj}^2\right) - \frac{1}{2R}\sum_{i \neq j} J_{ij}\,V_{ci}V_{cj} - \frac{\gamma\omega_0 C}{2}\sum_i\left(V_{ci}^2 - 1\right) \tag{6}$$

The above Lagrangian corresponds to Lagrange multiplier optimization using the following merit function and constraints:

$$f(\mathbf{V}_c) = \frac{1}{4R}\sum_{i \neq j}\left(V_{ci} - J_{ij}V_{cj}\right)^2, \qquad g_i(V_{ci}) = V_{ci}^2 - 1 = 0$$

Again, we see that the gain coefficient $\gamma$ (times $-\omega_0 C / 2$) is the Lagrange multiplier of the constraint $V_{ci}^2 = 1$.
5.2.3 Iterative Optimization as a Series of Time Steps
Although the extremum of Eq (6) represents the final evolved state of a physical system representing an optimization outcome, it would be interesting to examine the time evolution toward the optimal state. The optimization occurs by iterative steps, where each iteration can be regarded to take place in a time interval $\Delta t$. At each successive iteration, the voltage amplitude takes a step whose magnitude is proportional to the gradient of the Lagrange function:

$$V_{ci}(t + \Delta t) = V_{ci}(t) - \kappa\,\frac{\partial L}{\partial V_{ci}}\,\Delta t \tag{7}$$

where the minus sign on the right hand side drives the system toward minimum power dissipation. As the Lagrange function comes closer to its minimum, the gradient diminishes and the amplitude steps become smaller and smaller. The adjustable proportionality constant $\kappa$ controls the size of each iterative step; it also calibrates the dimensional units between power dissipation and voltage amplitude. (Since $V_{ci}$ is voltage amplitude, $\kappa$ has units of reciprocal capacitance.) Converting Eq (7) to continuous time,

$$\frac{dV_{ci}}{dt} = -\kappa\,\frac{\partial L}{\partial V_{ci}} \tag{8}$$

$$\frac{dV_{ci}}{dt} = -\kappa\,\frac{\partial}{\partial V_{ci}}\left[f(\mathbf{V}_c) + \sum_j \lambda_j\,g_j(V_{cj})\right] \tag{9}$$

where the $\lambda_j$ play the role of Lagrange multipliers, and the $g_j$ are the constraints. Taking $L$ from Eq (6), the gradient in the voltage amplitudes becomes

$$\frac{\partial L}{\partial V_{ci}} = \frac{N-1}{R}\,V_{ci} - \frac{1}{R}\sum_{j \neq i} J_{ij}\,V_{cj} - \gamma\omega_0 C\,V_{ci}$$

Substituting into Eq (8), the time derivative of the voltage amplitudes becomes

$$\frac{dV_{ci}}{dt} = -\kappa\left[\frac{N-1}{R}\,V_{ci} - \frac{1}{R}\sum_{j \neq i} J_{ij}\,V_{cj} - \gamma\omega_0 C\,V_{ci}\right] \tag{10}$$

The constant $\kappa$ can be absorbed into the units of time (with $\kappa = 1/(4C)$) to reproduce the dynamical equation (4), the slowly varying amplitude approximation for the coupled radio oscillators. Thus, it is interesting that the slowly-varying time dynamics can be reproduced from iterative optimization steps on the Lagrange function.
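The equivalence claimed above can be checked numerically. In the sketch below (ours; all component values are arbitrary normalized units), the same initial state is evolved once with the slowly-varying amplitude equation (4) and once by discrete gradient steps on the Lagrange function (6) with $\kappa = 1/(4C)$; the two trajectories coincide:

```python
import numpy as np

rng = np.random.default_rng(2)
N, R, C, w0, g = 4, 1.0, 1.0, 1.0, 0.1   # oscillator count and normalized circuit values
J = rng.choice([-1.0, 1.0], size=(N, N))
J = np.triu(J, 1); J = J + J.T

def dVdt(V):
    # Slowly-varying amplitude dynamics, Eq. (4)
    return -(N - 1) * V / (4 * R * C) + (J @ V) / (4 * R * C) + g * w0 * V / 4

def gradL(V):
    # Gradient of the Lagrange function (6) in the voltage amplitudes
    return (N - 1) * V / R - (J @ V) / R - g * w0 * C * V

V_ode = 0.01 * rng.standard_normal(N)
V_iter = V_ode.copy()
dt, kappa = 1e-3, 1.0 / (4 * C)

for _ in range(5000):
    V_ode += dt * dVdt(V_ode)             # physical time evolution
    V_iter -= dt * kappa * gradL(V_iter)  # iterative optimization step, Eq. (7)

print("max trajectory difference:", np.max(np.abs(V_ode - V_iter)))  # zero up to rounding
```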
5.2.4 Conclusions from Coupled Oscillators on the Real Axis
1. As in other cases, the coupled LC oscillator system [12] minimizes the entropy production rate, or power dissipation, incorporating the power input from the pump gain.
2. The coupled LC oscillator system actually performs Lagrange Multiplier optimization, whose merit function is the Lagrange function.
3. The gain in each oscillator plays the role of the corresponding Lagrange Multiplier.
4. Under the amplitude constraint on the voltage cosine component, $V_{ci} = \pm 1$ (in units of a reference voltage), the entropy generation minimization scheme is actually binary, similar to a flip-flop.
5. Successive iterations toward a power dissipation minimum, employing gradient descent in the amplitudes of the cosine components of the voltages $V_{ci}$, surprisingly, yield the same time-dependent slowly-varying amplitude differential equation (4).
5.3 Coupled laser cavities using multicore fibers
5.3.1 Overview
The Ising solver designed by Babaeian et al., [13], makes use of coupled laser modes in a multicore optical fiber. Polarized light in each core of the optical fiber corresponds to each magnetic moment in the Ising problem. This means that the number of cores needs to be equal to the number of magnets in the given Ising instance. The right-hand circular polarization and left-hand circular polarization of the laser light in each core represent the two polarities (up and down) of the corresponding magnet. The mutual coherence of the various cores is maintained by injecting seed light from a master laser.
The coupling between the fiber cores is achieved through amplitude mixing of the laser modes by Spatial Light Modulators at one end of the multicore fiber, [13]. These Spatial Light Modulators couple light amplitude from the $j$-th core to the $i$-th core according to the prescribed connection weight $J_{ij}$.
5.3.2 Equations and comparison with Lagrange multipliers
As in prior physical examples, the electric field amplitudes can be expressed in the slowly-varying polarization modes of the $i$-th core, $E_{Ri}$ and $E_{Li}$, where the two electric field amplitudes are in-phase temporally, are positive real, but have different polarization. They are,

$$\frac{dE_{Ri}}{dt} = -\alpha_i\,E_{Ri} + \gamma_i\,E_{Ri} + \sum_{j \neq i} J_{ij}\,E_{Rj}$$

$$\frac{dE_{Li}}{dt} = -\alpha_i\,E_{Li} + \gamma_i\,E_{Li} + \sum_{j \neq i} J_{ij}\,E_{Lj}$$

where $\alpha_i$ is the decay rate in the optical core in 1/sec units, and $\gamma_i$ is the gain supplied to the $i$-th core. The first term on the right in both equations represents optical fiber losses while the second term represents the gain provided. The third term represents the coupling between the $i$-th and $j$-th cores that is provided by the Spatial Light Modulators. We next define the degree of polarization as $\mu_i \equiv E_{Ri} - E_{Li}$. (This differs from the usual definition of degree of polarization of an optical beam, which contains intensity rather than electric field amplitude.) Subtracting the two equations above, we obtain the following evolution equation for $\mu_i$:

$$\frac{d\mu_i}{dt} = \left(\gamma_i - \alpha_i\right)\mu_i + \sum_{j \neq i} J_{ij}\,\mu_j$$

The power dissipation, or entropy generation, is proportional to two orthogonal components of electric field, each squared, $E_{Ri}^2 + E_{Li}^2$. But this can also be written $\frac{1}{2}\left(E_{Ri} + E_{Li}\right)^2 + \frac{1}{2}\left(E_{Ri} - E_{Li}\right)^2$. The total amplitude $\left(E_{Ri} + E_{Li}\right)$ can be regarded as relatively constant as energy switches back and forth between right and left circular polarization. Then the changes in power dissipation, or entropy generation, would be most influenced by the quadratic terms in $\mu_i$:

$$P(\boldsymbol{\mu}) = \sum_i\left(\alpha_i - \gamma_i\right)\mu_i^2 - \sum_{i \neq j} J_{ij}\,\mu_i\mu_j$$
As before, we add the digital constraints $\mu_i^2 = 1$, where $\mu_i = \pm 1$ represents fully left or right circular optical polarization. With the constraints, the corresponding Lagrange function is:

$$L(\boldsymbol{\mu}, \boldsymbol{\gamma}) = \sum_i \alpha_i\,\mu_i^2 - \sum_{i \neq j} J_{ij}\,\mu_i\mu_j - \sum_i \gamma_i\left(\mu_i^2 - 1\right)$$

Once again, the gains $\gamma_i$ play the role of Lagrange multipliers. Thus, a minimization of the power dissipation, subject to the optical gain $\gamma_i$, solves the Ising problem defined by the same couplings $J_{ij}$.
The power dissipation function and constraint functions in the Lagrange function above are:

$$f(\boldsymbol{\mu}) = \sum_i \alpha_i\,\mu_i^2 - \sum_{i \neq j} J_{ij}\,\mu_i\mu_j, \qquad g_i(\mu_i) = \mu_i^2 - 1 = 0$$
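The following sketch (ours; the gain-ramp rule and all parameters are arbitrary assumptions layered on the equations above) integrates the two polarization equations and confirms that the degree of polarization $\mu_i = E_{Ri} - E_{Li}$ settles to the binary values $\pm 1$:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5
J = rng.choice([-1.0, 1.0], size=(N, N))
J = np.triu(J, 1); J = J + J.T

alpha, dt = 1.0, 1e-3
gamma = np.zeros(N)                           # per-core gains (Lagrange multipliers)
ER = 1.0 + 1e-2 * rng.standard_normal(N)      # right-circular field amplitudes
EL = 1.0 + 1e-2 * rng.standard_normal(N)      # left-circular field amplitudes

for _ in range(100000):
    ER += dt * ((gamma - alpha) * ER + J @ ER)
    EL += dt * ((gamma - alpha) * EL + J @ EL)
    mu = ER - EL                              # degree of polarization (field amplitudes)
    gamma += dt * 0.5 * (1.0 - mu**2)         # assumed ramp: adjust gain until mu_i^2 -> 1

print("mu:", np.round(ER - EL, 3))            # settles near +/-1
```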
5.3.3 Conclusions for Coupled Multicore Fibers
1. As in other cases, the coupled multicore fiber system [13] minimizes the entropy production rate, or power dissipation, subject to the constraint of the gain supplied to each core.
2. The coupled multicore fiber system actually performs Lagrange Multiplier optimization.
3. The gain in each fiber plays the role of the corresponding Lagrange Multiplier.
4. Under the constraint of complete circular light polarization, $\mu_i = \pm 1$, the entropy generation minimization procedure is actually binary, similar to a flip-flop.
5.4 Coupled Electrical Oscillators on the Unit Circle
5.4.1 Overview
In this section, we consider a network of interconnected, nonlinear, but amplitude-stable electrical oscillators designed by Roychowdhury et al. [14] to represent a conventional Ising system for which we seek a digital solution, with each magnetic dipole along the z-axis in the magnetic dipole space. To solve this, Roychowdhury et al. provide a dissipative system of LC oscillators, somewhat similar to the Optical Parametric Oscillators in Section 5.1, but with oscillation amplitude clamped, and oscillation phase $\phi_i = 0$ or $\pi$ revealing the preferred magnetic dipole orientation $\mu_i = \pm 1$. It is noteworthy that Roychowdhury goes beyond Ising machines and constructs general digital logic gates using these amplitude-stable oscillators in [28].
In the construction of their oscillators, Roychowdhury et al. [14] use nonlinear elements that behave like negative resistors at low voltage amplitudes but with saturating resistance at high voltage amplitudes. This produces amplitude-stable oscillators. In addition, Roychowdhury et al. [14] provide a second harmonic pump and use a form of parametric amplification (referred to as sub-harmonic injection locking in [14]) to obtain bistability with respect to phase.
The dynamics of the amplitude saturation is purposely very fast, and the oscillators are essentially clamped to an amplitude limit dictated by the nonlinear resistor in each oscillator. The phase dynamics induced by the 2nd harmonic pump are slower. It is the readout of these phase shifts, 0 or $\pi$, that provides the magnetic dipole orientation $\mu_i$. One key difference between this system and Yamamoto’s Optical Parametric Oscillator (OPO) system is that the OPO system had fast phase dynamics and slow amplitude dynamics, while the injection locking system has the reverse.
5.4.2 Equations and comparison with Lagrange Multipliers
Roychowdhury et al. [14] derived the dynamics of their amplitude-stable oscillator network using perturbation concepts developed in [29]. While a circuit diagram is not shown, [14] invokes the following dynamical equation for the phases of their electrical oscillators:

$$\frac{d\phi_i}{dt} = -\frac{1}{R}\sum_{j \neq i} J_{ij}\,\sin\left(\phi_i - \phi_j\right) - \lambda_i\,\sin\left(2\phi_i\right) \tag{11}$$

where $R$ is a coupling resistance in their system, $\phi_i$ is the phase of the $i$-th electrical oscillator, and the $\lambda_i$ are decay parameters that dictate how fast the phase angles settle toward their steady state values.
We shall now show that Eq (11) can be reproduced by iteratively minimizing the power dissipation in their system. Power dissipation across a resistor is $\left(V_i - V_j\right)^2 / R$, where $\left(V_i - V_j\right)$ is the voltage difference across resistor $R$. Since $V_i$ and $V_j$ are sinusoidal, the power dissipation consists of constant terms and a cross-term of the form:

$$P(\boldsymbol{\phi}) = P_0 - \frac{1}{R}\sum_{i \neq j} J_{ij}\,\cos\left(\phi_i - \phi_j\right)$$

where $P(\boldsymbol{\phi})$ is the power dissipated in the resistors and $P_0$ is a constant. Magnetic dipole orientation, parallel or anti-parallel, is represented by whether the electrical oscillator phase difference is $0$ or $\pi$ respectively. We are permitted to choose an origin for angle space at which every $\phi_i = 0$ or $\pi$. This can be implemented through a constraint of the form:

$$g_i(\phi_i) = \sin^2\phi_i = 0$$

Combining the power dissipated in the resistors with the constraint functions we obtain a Lagrange function:

$$L(\boldsymbol{\phi}, \boldsymbol{\lambda}) = -\frac{1}{R}\sum_{i \neq j} J_{ij}\,\cos\left(\phi_i - \phi_j\right) + \sum_i \lambda_i\,\sin^2\phi_i$$

where $\lambda_i$ is the Lagrange Multiplier corresponding to the phase angle constraint, and $J_{ij} = \pm 1$ is a digital multiplier of the conductance $1/R$. Gradient descent on this Lagrange function, $d\phi_i/dt \propto -\partial L / \partial \phi_i$, reproduces Eq (11) up to constant factors.
The Lagrange function above is isomorphic with the general form in Section 4. The effective merit function and constraints in this correspondence are:

$$f(\boldsymbol{\phi}) = -\frac{1}{R}\sum_{i \neq j} J_{ij}\,\cos\left(\phi_i - \phi_j\right), \qquad g_i(\phi_i) = \sin^2\phi_i = 0$$
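A minimal sketch of these phase dynamics (ours; coupling values, multiplier values, and step sizes are arbitrary) performs gradient descent on the Lagrange function $L(\boldsymbol{\phi}, \boldsymbol{\lambda})$ above and reads out the spins from the settled phases:

```python
import numpy as np

rng = np.random.default_rng(4)
N, R, dt = 6, 1.0, 1e-2
J = rng.choice([-1.0, 1.0], size=(N, N))
J = np.triu(J, 1); J = J + J.T

phi = 2 * np.pi * rng.random(N)          # oscillator phases
lam = 0.5 * np.ones(N)                   # fixed multipliers for the sin^2 phase constraint

for _ in range(50000):
    # dL/dphi_i = (2/R) sum_j J_ij sin(phi_i - phi_j) + lam_i sin(2 phi_i)
    grad = (2.0 / R) * np.sum(J * np.sin(phi[:, None] - phi[None, :]), axis=1) \
           + lam * np.sin(2.0 * phi)
    phi -= dt * grad                     # gradient descent reproduces Eq. (11) up to constants

spins = np.sign(np.cos(phi))             # phase 0 -> spin +1, phase pi -> spin -1
print("phases/pi:", np.round((phi / np.pi) % 2, 2))
print("spins    :", spins)
```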
5.4.3 Conclusions from Coupled Oscillators on the Unit Circle
1. As in other cases, the amplitude-stable oscillator system [14] minimizes the entropy production rate, or power dissipation, subject to constraints of amplitude and phase reference.
2. The amplitude-stable oscillator system actually performs Lagrange Multiplier optimization.
3. The phase decay time constant $\lambda_i$ in each oscillator plays the role of the corresponding Lagrange Multiplier.
4. Under the phase reference constraint, $\phi_i = 0$ or $\pi$, the entropy generation minimization scheme is actually binary, similar to a flip-flop.
5.5 Coupled polariton condensates
5.5.1 Overview
Kalinin and Berloff [15] proposed a system consisting of coupled polariton condensates to minimize the XY Hamiltonian. The XY Hamiltonian is a 2-dimensional version of the Ising Hamiltonian and is given by:

$$H_{XY} = -\sum_{i \neq j} J_{ij}\,\boldsymbol{\mu}_i \cdot \boldsymbol{\mu}_j$$

where $\boldsymbol{\mu}_i$ represents the magnetic moment vector of the $i$-th spin, restricted to the spin-space XY plane.
To represent the spin system, Kalinin and Berloff pump a grid of coupled semiconductor microcavities with laser beams and observe the formation of strongly coupled exciton-photon states called polaritons. For our purposes, the polaritonic nomenclature is irrelevant. For us, these are simply coupled electromagnetic cavities, similar to the optical and LC resonators that we have already discussed. The electromagnetic system operates by the principle of minimum entropy generation, similar to the previous cases. The complex electromagnetic amplitude in the $i$-th microcavity can be written $\Psi_i = E_{ci} + jE_{si}$, where $E_{ci}$ and $E_{si}$ represent the cosine and sine quadrature components of $\Psi_i$, and j is the unit imaginary. In this case we identify $E_{ci}$ as representing the X-component of the magnetic dipole vector, and $E_{si}$ as representing the Y-component of the magnetic dipole vector. The electromagnetic microcavity system settles into a state of minimum entropy generation as the laser pump and optical gain are ramped up to compensate for the intrinsic cavity losses. The phase angles of the final electromagnetic modes in the complex plane are then reported as the corresponding magnetic moment angles in the XY plane.
An important point to note here is that the electromagnetic cavities experience normal phase-independent gain and not parametric gain which happens to be phase-dependent. As a consequence, this system does NOT seek phase bistability as appropriate for binary up/down spin orientations. This is because we are searching for the magnetic dipole vector angles in the XY plane which would minimize the corresponding XY magnetic energy.
5.5.2 Lagrange function and Lagrange multipliers
Ref. [15] uses ‘Ginzburg-Landau’ equations to analyze their system, resulting in equations for the complex amplitudes $\Psi_i$ of the polariton wavefunctions. But the $\Psi_i$ are actually the complex electric field amplitudes of the corresponding $i$-th cavity. The electric field amplitudes satisfy the following slowly-varying-amplitude equation:

$$\frac{d\Psi_i}{dt} = \left(\gamma_i - \sigma\left|\Psi_i\right|^2 - jU\left|\Psi_i\right|^2\right)\Psi_i + \sum_{k \neq i} J_{ik}\,\Psi_k \tag{12}$$

where $\gamma_i$ represents the optical gain, $\sigma$ represents nonlinear attenuation, $U$ represents nonlinear phase shift, and $J_{ik}$ is a dissipative cross-coupling term representing linear loss. We note from the above that both the amplitudes and phases of the electromagnetic modes are coupled to each other and evolve on comparable timescales. This is in contrast to ref. [14], where the main dynamics were embedded in phase (amplitude was fast and almost fixed), or conversely ref. [12], where they were embedded in amplitude (phase was fast and almost fixed).
We shall now show that the method of ref. [15] is essentially the method of Lagrange multipliers with an added ‘rotation’. The entropy generation or power dissipation rate is:

$$P(\boldsymbol{\Psi}) = -\sum_i \gamma_i\left|\Psi_i\right|^2 - \sum_{i \neq k} J_{ik}\,\mathrm{Re}\!\left\{\Psi_i^*\Psi_k\right\}$$

If we add a saturation constraint, $\left|\Psi_i\right|^2 = 1$, then by analogy to the previous sections, $\gamma_i$ is reinterpreted as a Lagrange Multiplier:

$$L(\boldsymbol{\Psi}, \boldsymbol{\gamma}) = -\sum_{i \neq k} J_{ik}\,\mathrm{Re}\!\left\{\Psi_i^*\Psi_k\right\} - \sum_i \gamma_i\left(\left|\Psi_i\right|^2 - 1\right) \tag{13}$$

where $L$ is the Lagrange function that represents power dissipation combined with the amplitude constraint $\left|\Psi_i\right|^2 = 1$. Thus, the scheme of coupled polaritonic resonators operates to find the state of minimum entropy generation in steady state, similar to the parametric oscillator case of Section 5.1. That is, the steady-state solution of dynamics (12) renders (13) stationary with respect to changes in $\Psi_i$. The difference is that the coupled polaritonic system solves the XY Ising problem for a magnetic moment restricted to the magnetic XY plane, while the parametric oscillator system, in Section 5.1, solves the Z Ising problem. For any particular mathematical optimization that we would perform, we still retain the burden of specifying a dissipative system whose entropy generation matches the optimization that we are seeking.
The imaginary ‘rotation’ term, $jU\left|\Psi_i\right|^2\Psi_i$, could potentially be of use in developing more sophisticated algorithms than the method of Lagrange multipliers, and we discuss this prospect in some detail in Section 6.2, where a system with a similar, but more general, ‘rotation’ term is discussed.
5.5.3 Iterative Evolution of Lagrange Multipliers
Next, we discuss how Kalinin and Berloff [15] adjusted their Lagrange multipliers during the course of their optimization. In the method of Lagrange multipliers, the merit function, Eq (13), is used to optimize not only the electric field amplitudes $\Psi_i$ but also the Lagrange Multipliers $\gamma_i$. The authors in Sections 5.1 and 5.2 used simple heuristics to adjust their gains/decay constants, which we have proven to be the Lagrange multipliers. Kalinin and Berloff of this section employ the Lagrange function optimization itself to adjust the gains/losses, as in the complete Lagrange method discussed next.
To iteratively adjust the Lagrange multipliers, we shall briefly shift back to the tutorial notation of Section 4. Afterward, we shall translate it back to the notation of this section and show that Kalinin, Berloff et al., indeed use this same iterative procedure to adjust their Lagrange multipliers.
The Lagrange function is given by $L(\mathbf{x}, \boldsymbol{\lambda}) = f(\mathbf{x}) + \sum_j \lambda_j\,g_j(\mathbf{x})$, where the $x_i$ are the field variables and the $\lambda_j$ are the Lagrange multipliers. Then, the procedure to find the optimal $\mathbf{x}$ and $\boldsymbol{\lambda}$ is to perform gradient descent of $L$ in $\mathbf{x}$ and gradient ascent of $L$ in $\boldsymbol{\lambda}$. The reason for ascent in $\boldsymbol{\lambda}$, rather than descent, is to more strictly penalize deviations from the constraints. In the language of iterations, this leads to the expressions:

$$\mathbf{x}(t + \Delta t) = \mathbf{x}(t) - \alpha\,\nabla_{\mathbf{x}}L\,\Delta t, \qquad \boldsymbol{\lambda}(t + \Delta t) = \boldsymbol{\lambda}(t) + \beta\,\nabla_{\boldsymbol{\lambda}}L\,\Delta t$$

where $\alpha$ and $\beta$ are suitably chosen step sizes.
With our identification that the Lagrange multipliers are actually the same as the gains $\gamma_i$, we plug the expression for the full Lagrange function (13) into the second iterative equation, projecting out the constraint function $g_i = \left|\Psi_i\right|^2 - 1$, and take the limit $\Delta t \to 0$. We obtain the following dynamical equation for the gains $\gamma_i$:

$$\frac{d\gamma_i}{dt} = \beta\left(1 - \left|\Psi_i\right|^2\right) \tag{14}$$

The above equation for the iterative evolution of the Lagrange multipliers is indeed the very same evolution that Kalinin, Berloff et al. employ in their coupled polariton system.
To Eq (14), we must add the iterative evolution of the field variables $\Psi_i$:

$$\frac{d\Psi_i}{dt} = -\alpha\,\frac{\partial L}{\partial \Psi_i^*} = \alpha\left[\gamma_i\,\Psi_i + \sum_{k \neq i} J_{ik}\,\Psi_k\right] \tag{15}$$
Equations (14) and (15) represent the full iterative evolution, but in some of the earlier sub-sections, the gain $\gamma_i$ was instead assigned a heuristic time dependence.
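The combined descent-ascent dynamics can be sketched numerically as follows (ours; all parameters, including the rotation strength $U$, are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
N, U, dt = 5, 0.1, 1e-3
J = rng.choice([-1.0, 1.0], size=(N, N))
J = np.triu(J, 1); J = J + J.T

psi = 1e-2 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))  # cavity fields
gamma = np.zeros(N)                        # pump gains = Lagrange multipliers

for _ in range(100000):
    # Eq. (15) plus the non-gradient 'rotation' term -jU|psi|^2 psi from Eq. (12)
    psi += dt * (gamma * psi + J @ psi - 1j * U * np.abs(psi)**2 * psi)
    gamma += dt * (1.0 - np.abs(psi)**2)   # Eq. (14): gradient ascent in the multipliers

print("|psi|    :", np.round(np.abs(psi), 3))     # saturates near 1
print("angles   :", np.round(np.angle(psi), 2))   # reported XY spin angles
print("XY energy:", -np.real(np.conj(psi) @ (J @ psi)))
```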
We conclude this sub-section by splitting the Lagrange function into the effective merit function $f(\boldsymbol{\Psi}) = -\sum_{i \neq k} J_{ik}\,\mathrm{Re}\{\Psi_i^*\Psi_k\}$, and the constraint functions $g_i(\Psi_i) = \left|\Psi_i\right|^2 - 1 = 0$. The extra ‘phase rotation’ $U$ is not captured by this interpretation.
5.5.4 Conclusions from Coupled Electromagnetic Cavities
1. As in other cases, the coupled-cavity system [15] minimizes the steady-state entropy production rate, or power dissipation, which includes the power input from the pump gain.
2. The gain in each oscillator plays the role of the corresponding Lagrange Multiplier.
3. There is a phase rotation term in the dynamics that is not captured by the Lagrange function framework.
6 Other methods in the literature
In this section, we look at other methods in the literature that do not explicitly implement the method of Lagrange Multipliers but nevertheless end up with dynamics that resemble it to varying extents. All these methods offer operating regimes where the dynamics are not analogous to Lagrange multiplier optimization, and we believe it is an interesting avenue of future work to study the capabilities of these regimes.
6.1 Iterative Analog Matrix Multipliers
Soljacic et al. [16] developed an iterative procedure consisting of repeated matrix multiplication to solve the Ising problem. Their algorithm was implemented on a photonic circuit that utilized on-chip optical matrix multiplication units composed of Mach-Zehnder interferometers that were first introduced for matrix algebra by Zeilinger et al. in [30]. Soljacic et al. showed that their algorithm performed optimization on an effective merit function, such as total magnetic Ising energy.
Let us use our insights from the previous sections and see how one would implement iterative optimization using an optical matrix multiplier. Let the multiple magnetic moment configuration of the Ising problem be represented as a vector of electric field amplitudes, $\mathbf{E}$, of the spatially-separated optical modes. Each mode field amplitude $E_i$ represents the value of one magnetic moment. In each iteration, the optical modes are fed into the optical circuit, which performs matrix multiplication, and the resulting output optical modes are then fed back to the optical circuit input for the next iteration. Optical gain or some other type of gain sustains the successive iterations.
In this section we design an iterative optimization scheme for the Ising problem that involves only matrix multiplications in each iterative step and whose power dissipation function matches the magnetic Ising energy. A simple block diagram of such a scheme is shown in Fig. 8.

We wish to design the matrix multiplication unit such that it has the following power dissipation function:

$$P(\mathbf{E}) = -\sum_{i \neq j} J_{ij}\,E_i^* E_j$$

The Lagrange function, including a binary constraint, $\left|E_i\right|^2 = 1$, is given by:

$$L(\mathbf{E}, \boldsymbol{\lambda}) = -\sum_{i \neq j} J_{ij}\,E_i^* E_j + \sum_i \lambda_i\left(\left|E_i\right|^2 - 1\right) \tag{16}$$

where the $J_{ij}$ represent dissipative loss associated with electric field interference between optical modes in the Mach-Zehnder interferometers. The dissipation is compensated by optical gain in the system.
The iterative multiplicative procedure that evolves the electric fields toward the minimum of the Lagrange function Eq (16) is given by:

$$\mathbf{E}(t + \Delta t) = \mathbf{E}(t) - \kappa\,\nabla_{\mathbf{E}}L\,\Delta t$$

where we move from iteration to iteration by taking steps in $\mathbf{E}$ proportional to the gradient of the Lagrange function. ($\nabla_{\mathbf{E}}$ represents separate differentiation with respect to the two quadrature components.) With each iteration, we feed the output of the Mach-Zehnder interferometer matrix-multiplier array back to the input, compensating for the losses with gain. Calculating the gradient, we obtain:

$$E_i(t + \Delta t) = E_i(t) + \kappa\,\Delta t\left[\sum_{j \neq i} J_{ij}\,E_j(t) - \lambda_i\,E_i(t)\right]$$

where $\kappa$ is a constant step size, whose dimensionality should ensure consistency of units. Sending all the terms involving the time step to the right hand side, we finally get:

$$E_i(t + \Delta t) = \sum_j\left[\left(1 - \kappa\Delta t\,\lambda_i\right)\delta_{ij} + \kappa\Delta t\,J_{ij}\right]E_j(t) \tag{17}$$

where $\delta_{ij}$ is the Kronecker delta that is 1 only if $i = j$. The Mach-Zehnder interferometers should be tuned to the matrix $\left(1 - \kappa\Delta t\,\lambda_i\right)\delta_{ij} + \kappa\Delta t\,J_{ij}$. Thus, we have an iterative matrix multiplier scheme that minimizes the Lagrange function and performs Lagrange multiplier optimization of the Ising problem. In effect, a lump of dissipative optical circuitry, compensated by optical gain, will, in a series of iterations, settle into a solution of the Ising problem.
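The iteration of Eq (17) is easy to emulate (a sketch of ours; the common multiplier value, step size, and the clipping used as a stand-in for gain saturation are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
N, step, lam = 6, 0.05, 1.0              # step = kappa * dt; one common multiplier
J = rng.choice([-1.0, 1.0], size=(N, N))
J = np.triu(J, 1); J = J + J.T

# Matrix loaded into the Mach-Zehnder mesh, Eq. (17)
M = (1.0 - step * lam) * np.eye(N) + step * J

E = 1e-2 * rng.standard_normal(N)        # initial mode amplitudes
for _ in range(2000):
    E = M @ E                            # one feedback round trip of matrix multiplication
    E = np.clip(E, -1.0, 1.0)            # crude saturation standing in for gain clamping

print("spins:", np.sign(E))
print("Ising energy:", -np.sign(E) @ J @ np.sign(E))
```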
The simple system above differs from that of Soljacic et al. [16] quite significantly, in that their method has added noise and nonlinear thresholding after each iteration. It is possible that their modifications lead to performance improvements; alternatively, the more standard approach above may work equally well. A detailed description of the Soljacic approach is presented in Appendix E.
6.2 Leleu Mathematical Ising Solver
Leleu et al. [11] proposed a modified version of Yamamoto’s Ising machine [7]. Leleu’s method significantly resembles the Lagrange method while incorporating important new features that might be important research directions in their own right. To understand the similarities and differences between Leleu’s method and that of Lagrange multipliers, we shall first list out the Lagrange multiplier equations of motion.
We recall the Lagrange function for the Ising problem that we encountered in Section 5:

$$L(\mathbf{x}, \boldsymbol{\gamma}) = \sum_i \alpha_i\,x_i^2 - \sum_{i \neq j} J_{ij}\,x_i x_j - \sum_i \gamma_i\left(x_i^2 - 1\right)$$

In the above, the $x_i$ are the optimization variables, $J_{ij}$ is the interaction matrix, $\gamma_i$ is the gain provided to the $i$-th variable, and $\alpha_i$ is the loss experienced by the $i$-th variable. To find a local optimum that satisfies the constraints, one has to perform gradient descent on the Lagrange function in the $x_i$ variables and gradient ascent in the $\gamma_i$ variables, as discussed in Section 5.5. That is, the iterations one would need to perform are:

$$x_i(t + \Delta t) = x_i(t) - \eta\,\frac{\partial L}{\partial x_i}\,\Delta t, \qquad \gamma_i(t + \Delta t) = \gamma_i(t) + \beta\,\frac{\partial L}{\partial \gamma_i}\,\Delta t$$

where $\eta$ and $\beta$ are suitably chosen step sizes. Substituting the expression for $L$ into the above, absorbing constant factors into the time units, and taking the limit of $\Delta t \to 0$, we get the following equations of motion:

$$\frac{dx_i}{dt} = \left(\gamma_i - \alpha_i\right)x_i + \sum_{j \neq i} J_{ij}\,x_j \tag{18}$$

$$\frac{d\gamma_i}{dt} = \beta\left(1 - x_i^2\right) \tag{19}$$
We shall compare these equations with those designed by Leleu et al. in [11]. Their system is described by the following equations:

$$\frac{dx_i}{dt} = \left(\gamma - \alpha\right)x_i + e_i\sum_{j \neq i} J_{ij}\,x_j \tag{20}$$

$$\frac{de_i}{dt} = \beta\left(1 - x_i^2\right)e_i \tag{21}$$

where the $x_i$ represent the optimization variables as usual, $\alpha$ represents the loss experienced by each degree of freedom, $\gamma$ represents a common gain supplied to each variable, $\beta$ is some suitably chosen positive parameter, and the $e_i$ represent error coefficients that capture how far away each $x_i^2$ is from its desired unity saturation amplitude. Leleu et al. had cubic terms in $x_i$ in [11], but we shall ignore them in the present discussion for the sake of simplicity. A discussion of these cubic terms is given in Appendix C.
It is clear at once that there are significant similarities between Leleu’s system and the Lagrange multiplier system. The optimization variables in both systems experience linear losses and gains and have interaction terms that capture the Ising interaction. Both systems have auxiliary variables that are varied according to how far away each degree of freedom is from its preferred saturation amplitude. However, the similarities end here.
A major differentiation in Leleu’s system is that $e_i$ multiplies the Ising interaction felt by the $i$-th variable, resulting in the effective coupling $e_i\,J_{ij}$. On the other hand, the complementary coefficient is $e_j\,J_{ji}$. This has the consequence that Leleu’s equations implement a system which has non-symmetric interactions between vector components $x_i$ and $x_j$. The inclusion of non-symmetric matrix terms seems to be important because Leleu’s system achieves excellent performance on Ising problems in the Gset problem set, as demonstrated in [11].
Let us obtain some intuition about this system by splitting the non-symmetric interaction term into the sum of a symmetric and an anti-symmetric part. This follows from the fact that any matrix $M$ can be written as the sum of a symmetric matrix, $\left(M + M^T\right)/2$, and an anti-symmetric matrix, $\left(M - M^T\right)/2$. The symmetric part leads to gradient descent dynamics just as in Eq (20), similar to all the systems in Section 5. Conversely, the anti-symmetric part causes an energy-conserving ‘rotary’ motion in the vector space of the $x_i$. Leleu et al.’s non-symmetric approach seems to contain the same number of auxiliary parameters, $e_i$ versus $\gamma_i$, and so the secret of their improved performance seems to lie in this anti-symmetric part. We believe further analysis of this method might be a fruitful future research direction.
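This decomposition is easy to verify numerically (a small sketch of ours, with arbitrary error coefficients): the anti-symmetric part does no work on the state, since $\mathbf{x}\cdot(A\mathbf{x}) = 0$ for any anti-symmetric $A$, so it only rotates.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 5
e = 0.5 + rng.random(N)                  # error coefficients e_i
J = rng.choice([-1.0, 1.0], size=(N, N))
J = np.triu(J, 1); J = J + J.T

M = e[:, None] * J                       # Leleu's non-symmetric interaction e_i * J_ij
S = 0.5 * (M + M.T)                      # symmetric part: dissipative, gradient-like
A = 0.5 * (M - M.T)                      # anti-symmetric part: energy-conserving rotation

x = rng.standard_normal(N)
print("M equals S + A:", np.allclose(M, S + A))
print("x . (A x) =", round(float(x @ (A @ x)), 12))   # 0: no power dissipated
```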
We recall Onsager's reciprocity theorem [2], which states that the thermodynamic response coefficient $L_{ij}$ must be equal to the response coefficient $L_{ji}$ if time-reversal symmetry holds in the equilibrium state. This implies that one might have to construct a magnetic system of some sort, which breaks time-reversal symmetry, in order to physically implement the asymmetry embedded in the dynamics designed by Leleu et al.
In conclusion, the symmetric part of Eq. (20) takes iterative steps along the gradient of a dissipative merit function. The anti-symmetric part produces energy-conserving rotations in the vector space of optimization variables, $x_i$. The associated dynamical freedom might provide a fruitful direction for future research in optimization and deserves further study to ascertain its power.
7 Applications in Linear Algebra and Statistics
We have seen that minimum power dissipation solvers can address the Ising problem and various similar problems such as the traveling salesman problem. In this section, we provide yet another application of minimum entropy generation solvers, this time to an optimization problem that appears frequently in statistics, namely curve fitting. In particular, we make the observation that the problem of linear least squares regression, linear curve fitting with a quadratic merit function, resembles the Ising problem. In fact, the electrical circuit example we presented in Section 5.2 can be applied to linear regression, and we present such a circuit in this section. Our circuit provides a digital answer but requires a series of binary-weighted resistance values, $R, R/2, R/4, \ldots$, to represent arbitrary statistical input observations in binary form.
The objective of linear least squares regression is to fit a linear function to a given set of data $(a_i, y_i)$, $i = 1, \ldots, n$. The $a_i$ are input vectors of dimension $m$ while the $y_i$ are the observed outputs that we want our regression to capture. The linear function that is being fit is of the form $f(a) = w^{T} a$, where $a$ is a feature vector of length $m$ and $w$ is a vector of unknown weights multiplying the feature vector $a$. The vector $w$ is calculated by minimizing the sum of the squared errors it causes when used on an actual data set. This optimization problem may be represented as:
$\underset{w}{\text{minimize}} \;\; \sum_{i=1}^{n} \left( w^{T} a_i - y_i \right)^2 = \| A w - y \|^2,$

where the $i$-th row of the matrix $A$ is the vector $a_i^{T}$ and $y_i$ is the $i$-th component of the vector $y$. The merit function, upon expansion, yields $w^{T} A^{T} A\, w - 2\, y^{T} A\, w + y^{T} y$. This functional form is identical to that of the Ising Hamiltonian and we may construct an Ising circuit with $J = -A^{T} A$, with the weights $w_i$ acting like the unknown magnetic moments. There is an effective magnetic field in the problem, $h = A^{T} y$, and a fixed number $y^{T} y$ which doesn't play a role in the optimization. A simple circuit that solves the linear least squares problem for $m = 2$ (the case where there are two features per instance and, consequently, two weights to be estimated) is provided in Fig. 9. This circuit provides weights up to 2-bit precision.
Figure 9: Oscillator circuit that solves the two-feature linear least squares problem, with each weight represented to 2-bit precision.
The two oscillators on the left-hand side of the figure represent the $2^0$ and $2^1$ bits of the weight corresponding to the first feature, while the oscillators on the right-hand side are the corresponding bits of the second weight.
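The mapping from least squares to the Ising form can be checked numerically. The following sketch, with a small synthetic data set of our own construction, verifies that the expanded merit function agrees with the squared error for an arbitrary trial weight vector:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 50, 2                          # 50 observations, 2 features
A = rng.standard_normal((n, m))
y = A @ np.array([1.0, -1.0]) + 0.01 * rng.standard_normal(n)

# ||A w - y||^2 = w^T (A^T A) w - 2 (A^T y)^T w + y^T y
J = -(A.T @ A)                        # Ising coupling matrix
h = A.T @ y                           # effective magnetic field
const = y @ y                         # fixed number, irrelevant to the optimum

w = rng.standard_normal(m)            # arbitrary trial weight vector
lhs = np.sum((A @ w - y) ** 2)
rhs = -w @ J @ w - 2 * h @ w + const
print(np.isclose(lhs, rhs))           # True: the two merit functions coincide
```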
The cross-resistance that one would need to represent the coupling $J_{ij}$ that connects the $i$-th and $j$-th oscillators is calculated as:

$\frac{1}{R_{ij}} = \frac{b_0}{R} + \frac{b_1}{R/2} + \frac{b_2}{R/4},$

where $R, R/2, R/4$ is a binary hierarchy of resistances based on a reference resistor $R$, and $b_0, b_1, b_2$ are the binary digits of $|J_{ij}|$ expressed as a binary number: $|J_{ij}| = b_0 + 2 b_1 + 4 b_2$. This represents $J_{ij}$ to 3-bit precision using resistors that span a dynamic range of 4:1. Further, the sign of the coupling is allotted according to whether the resistors are parallel-connected or cross-connected. In operation, the resistors would be externally programmed to the correct binary values, with many more bits than 3-bit precision, as given by the matrix product $A^{T} A$.
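A minimal sketch of this 3-bit quantization is given below; the reference resistance value and the rounding convention are our own assumptions, chosen only to illustrate the binary conductance hierarchy:

```python
# Represent |J_ij| to 3-bit precision with binary-weighted conductances
# 1/R, 2/R, 4/R derived from a reference resistor R. The sign is realized
# by parallel- versus cross-connecting the resistor pair.
R = 10e3                                   # reference resistance in ohms (assumed)

def coupling_conductance(j_ij: float) -> tuple[float, int]:
    """Quantize j_ij (assumed in [-7, 7]) to bits b0, b1, b2; return (G, sign)."""
    sign = 1 if j_ij >= 0 else -1
    level = min(round(abs(j_ij)), 7)       # integer 0..7, i.e., 3 bits
    bits = [(level >> k) & 1 for k in range(3)]
    G = sum(b * (2**k) / R for k, b in enumerate(bits))
    return G, sign

G, s = coupling_conductance(-5.0)          # 5 = 101b, so G = (1 + 4) / R
print(G, s)                                # 0.0005 siemens, sign -1
```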
We have just solved the regression problem of the form $A w = y$, where the matrix $A$ and the vector $y$ were known measurements and the corresponding best weight vector $w$ for fitting was the unknown. We conclude by noting that this same procedure can be adopted to solve linear systems of equations of the form $A x = b$.
8 Discussion and Conclusion
Physics obeys a number of maximization or minimization principles such as the principle of Least Action, the principle of Minimum Entropy Generation, the Variational Principle, Physical Annealing, and the Adiabatic Principle (which in its quantum form is called Quantum Annealing).
Optimization is very significant in diverse fields ranging from scheduling and routing in operations research to protein folding in biology, portfolio optimization in finance, and energy minimization in physics. In this article, we made the observation that physics has optimization principles at its heart, and that they can be exploited to design fast, low-power, digital solvers that avoid the limits of standard computational paradigms. Nature thus provides us with a means to solve optimization problems in all these areas, including Engineering, Artificial Intelligence, Machine Learning (backpropagation), Control Theory, and Reinforcement Learning.
We reviewed 7 physical machines, proposed or built, that purport to solve the Ising problem and found that 6 of the 7 perform Lagrange multiplier optimization; further, they also obey the Principle of Minimum Entropy Generation (always subject to a power input constraint). This means that, by appropriate choice of parameter values, these physical solvers can be used to perform Lagrange multiplier optimization orders of magnitude faster and with lower power than conventional digital computers. This performance advantage can be utilized for optimization in machine learning applications where energy and time considerations are critical.
The questions arise: What are the action items? And what is the most promising near-term application? All the hardware approaches seem to work comparably well. The easiest to implement would be the electrical oscillator circuits, though the optical oscillator arrays can be compact and very fast. Electrically, there would be two integrated circuits: the oscillator array, and the connecting resistors that would need to be reprogrammed for different problems. The action item could be to design a first chip consisting of about 1000 oscillators, and a second chip that would consist of the appropriate coupling resistor array for a specific optimization problem. The resistors should be in an addressable binary hierarchy so that any desired resistance value can be programmed in by switches, within the number of bits of accuracy. It is possible to imagine solving a new Ising problem every millisecond by reprogramming the resistor chip.
On the software side, a compiler would need to be developed to translate an unsolved optimization problem into the resistor array that matches the desired goal. If the merit function were mildly nonlinear, we believe that the Principle of Minimum Entropy Generation would still hold, but there has been less background science justifying that claim.
With regard to the most promising near term application, it might be in Control Theory or in Reinforcement Learning in self-driving vehicles, where rapid answers are required, at modest power dissipation.
The act of computation can be regarded as a search among many possible answers, with the circuit finally converging to a single correct configuration. Thus the initial conditions may span a huge phase-space volume of possible solutions, ultimately transitioning into a final configuration representing a small or modest-sized binary number. This type of computing implies a substantial entropy reduction. This led to Landauer's admonition that computation costs an entropy decrease of $k_B \ln 2$ per bit, and a corresponding energy of $k_B T \ln 2$ per bit, for a final answer with $N$ binary digits.
By the 2nd Law of Thermodynamics, such an entropy reduction must be accompanied by an entropy increase elsewhere. For example, in a dissipative circuit, electricity converts to heat. In Landauer's viewpoint, the energy and entropy limit of computing was associated with the final act of writing out the answer in $N$ bits, assuming the rest of the computer was reversible. In practice, today's technology consumes many orders of magnitude more than the Landauer limit, owing to the insensitivity of transistors operating at 1 Volt, when they could be operating at 10 mVolts.
In the continuously dissipative circuits we have described here, the energy consumed would be infinite if we waited indefinitely after the system reached the final optimal state. If we terminate the powering of our optimizer systems once they reach the desired final answer, the energy consumed becomes finite. By operating at a voltage much less than 1 Volt, and by powering off after the desired answer is achieved, our continuously dissipating Lagrange optimizers could actually come closer to the Landauer limit than a conventional computer.
A controversial point relates to the quality of solutions obtained for NP-hard problems. The physical systems we are proposing evolve by steepest descent toward a local optimum, not a global optimum. Nonetheless, many of the authors of the 7 physical systems presented here have claimed to find better local optima than their competitors, owing to special adjustments in their methods. Undoubtedly, some improvements are possible, but none of the 7 papers reviewed here claims to always find the one global optimum, which would be NP-hard [31].
We have shown that a number of physical systems that perform optimization are acting through the Principle of Minimum Entropy Generation, though other physics principles could also fulfill this goal. As the systems evolve toward an extremum, they perform Lagrange function optimization where the Lagrange multipliers are given by the gain or loss coefficients that keep the machine running. Thus, Nature provides us with a series of physical Optimization Machines that are much faster and possibly more energy efficient than conventional computers.
The authors gratefully acknowledge useful discussions with Dr. Ryan Hamerly, Dr. Tianshi Wang, and Prof. Jaijeet Roychowdhury. The work of S. K. Vadlamani and Prof. E. Yablonovitch was supported by the National Science Foundation through the Center for Energy Efficient Electronics Science under Award ECCS-0939514 and the Office of Naval Research under grant #N00014-14-1-0505.
Appendix A Lagrange multipliers theory
In this appendix, we shall study in greater detail the method of Lagrange multipliers, one of the most well-known techniques for the solution of constrained optimization problems. At its core, it simply involves the modification of the merit function by adding terms that penalize constraint violations.
In the setting of constrained optimization, we are required to minimize an objective function $f(x)$ over all points $x$ that satisfy certain given inequality ($g_i(x) \le 0$, $i = 1, \ldots, p$) and equality ($h_j(x) = 0$, $j = 1, \ldots, q$) constraints. Let us assume that the domain set $\mathcal{D}$, that is, the intersection of the domains of $f$, the $g_i$, and the $h_j$, is a subset of $\mathbb{R}^n$. Further, let the set of points in $\mathcal{D}$ that satisfy all the constraints be called $\mathcal{F}$, the feasible set. Then, the optimization problem can be written as:

$\underset{x}{\text{minimize}} \;\; f(x)$
$\text{subject to} \;\; g_i(x) \le 0, \quad i = 1, \ldots, p,$
$\phantom{\text{subject to}} \;\; h_j(x) = 0, \quad j = 1, \ldots, q.$
A.1 Lagrange function
We now define a new function $L(x, \mu, \lambda)$ in $n + p + q$ variables, called the Lagrange function, as follows:

$L(x, \mu, \lambda) = f(x) + \sum_{i=1}^{p} \mu_i\, g_i(x) + \sum_{j=1}^{q} \lambda_j\, h_j(x).$

The summations that were added to the plain objective function serve as constraint violation penalty terms. The coefficients multiplying the penalty terms, $\mu_i$ and $\lambda_j$, are known as Lagrange multipliers. The inequality Lagrange multipliers $\mu_i$ are constrained to be non-negative in order that the penalty that arises when the inequality constraints are not satisfied (i.e. when $g_i(x) > 0$) is non-negative. The equality Lagrange multipliers, $\lambda_j$, have no such restrictions.
The Lagrange function has the advantage that it helps us express the constrained optimization of $f$ as an unconstrained optimization of $L$. That is, it can be shown that:

$\min_{x \in \mathcal{F}} f(x) = \min_{x \in \mathcal{D}} \; \max_{\mu \ge 0,\, \lambda} \; L(x, \mu, \lambda).$

The minimization over $x$ in the feasible set, $\mathcal{F}$, on the left-hand side has turned into a minimization over $x$ in the entire domain, $\mathcal{D}$, on the right-hand side.
A.2 Karush-Kuhn-Tucker (KKT) sufficient conditions
The Lagrange function also appears in an important, related role. While the conditions for a point to be an unconstrained optimum of a differentiable function are expressed in terms of the gradient of $f$, the optimality conditions for constrained optimization problems are naturally expressed in terms of the Lagrange function. These conditions are presented next.
A point $x^*$ is a local optimum of the function $f$ subject to the constraints if it satisfies the Karush-Kuhn-Tucker (KKT) sufficient conditions:

1. Primal feasibility: The point $x^*$ is feasible, that is, it satisfies all the constraints.

2. First-order condition: There exist Lagrange multipliers $\mu^*$ and $\lambda^*$ such that the following equation is satisfied:
$\nabla f(x^*) + \sum_{i=1}^{p} \mu_i^* \nabla g_i(x^*) + \sum_{j=1}^{q} \lambda_j^* \nabla h_j(x^*) = 0.$

3. Second-order condition: In addition to the first-order condition, if all the concerned functions are twice differentiable, we require that the Hessian of $L$ with respect to $x$ be positive definite along all directions that respect the active constraints. That is, the following has to be satisfied:
$v^{T}\, \nabla_x^2 L(x^*, \mu^*, \lambda^*)\, v > 0$
for all $v \neq 0$ satisfying $\nabla g_i(x^*)^{T} v = 0$ for $i \in A(x^*)$ and $\nabla h_j(x^*)^{T} v = 0$ for all $j$, where $A(x^*)$ is defined as the set of active constraints at $x^*$.

4. Complementary slackness: $\mu_i^*\, g_i(x^*) = 0$ is satisfied for all the inequality constraints, $i = 1, \ldots, p$.

5. Dual feasibility: The Lagrange multipliers of all the inequality constraints satisfy $\mu_i^* \ge 0$, with the inequality being strict for active constraints.
We recognize condition number 2 as the one we encountered in Section 4 in the main text.
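For concreteness, the following toy sketch (our own illustrative example, not drawn from [24] or [25]) checks primal feasibility and the first-order condition at the known optimum of a simple equality-constrained problem:

```python
import numpy as np

# Toy problem: minimize f(x) = x1^2 + x2^2 subject to h(x) = x1 + x2 - 1 = 0.
# Analytic optimum: x* = (1/2, 1/2) with equality multiplier lambda* = -1.
x_star = np.array([0.5, 0.5])
lam_star = -1.0

grad_f = 2 * x_star                      # gradient of the objective at x*
grad_h = np.array([1.0, 1.0])            # gradient of the equality constraint

print(np.allclose(grad_f + lam_star * grad_h, 0))   # first-order condition
print(np.isclose(x_star.sum() - 1.0, 0))            # primal feasibility
```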
A.3 Saddle point nature of global and local optima
If the functions $f$, $g_i$, $h_j$, and the feasible set $\mathcal{F}$ satisfy certain special conditions (such as, for instance, the Slater condition), a phenomenon called strong duality occurs. Under such circumstances, it turns out that the minimization and maximization in the above equation can be switched:

$\min_{x \in \mathcal{D}} \; \max_{\mu \ge 0,\, \lambda} \; L(x, \mu, \lambda) = \max_{\mu \ge 0,\, \lambda} \; \min_{x \in \mathcal{D}} \; L(x, \mu, \lambda).$
When this occurs, it can be shown that the optimal triple $(x^*, \mu^*, \lambda^*)$ that satisfies the KKT conditions actually forms a saddle point of the Lagrange function, $L$. That is, one has:

$L(x^*, \mu, \lambda) \;\le\; L(x^*, \mu^*, \lambda^*) \;\le\; L(x, \mu^*, \lambda^*)$

for all $x \in \mathcal{D}$ and all $\mu \ge 0$, $\lambda$.
This can be viewed as a simple motivation for the gradient descent of $L$ in the $x$ variables and gradient ascent of $L$ in the multiplier variables that we encountered in Section 5.5. In the next section, Appendix B, we shall follow Chapter 4 of the excellent book [25] and see how this descent/ascent procedure can be used to find local optima even if strong duality doesn't hold.
Appendix B Lagrange multipliers algorithms: Augmented Lagrangian
In this section, we discuss the so-called ‘Augmented Lagrangian method of multipliers’, a popular algorithm used to obtain locally optimal solutions that satisfy the KKT conditions. We shall only consider the case where there are no inequality constraints (and, consequently, no $\mu$ multipliers). This algorithm is discussed in detail in [25].
To motivate the ‘Augmented Lagrangian’ approach, let us examine the KKT conditions a bit more closely. We conclude from the first-order condition that a local optimum $x^*$ renders the function $L(x, \lambda^*)$ stationary. However, this doesn't guarantee that $x^*$ is a minimum of $L(x, \lambda^*)$. Indeed, the second-order condition tells us that $x^*$ could be a saddle point of $L(x, \lambda^*)$. This means that gradient descent-based algorithms will not converge to $x^*$ if the starting point is not in the correct region. It was to solve this problem that the ‘Augmented Lagrangian’ method was invented.
The Augmented Lagrange function, $L_c(x, \lambda)$, is given by:

$L_c(x, \lambda) = f(x) + \sum_{j=1}^{q} \lambda_j\, h_j(x) + \frac{c}{2} \sum_{j=1}^{q} h_j(x)^2.$

This function $L_c$, for a suitable choice of $c$, has the property that $x^*$ forms a minimum of $L_c(x, \lambda^*)$, and not just a saddle point as was the case with $L(x, \lambda^*)$.
In other words, local optima of the original constrained optimization problem can be obtained if we perform an unconstrained minimization of $L_c(x, \lambda^*)$. However, for this procedure to be a feasible solution approach, we would have to know the right $\lambda^*$. It has been shown in [25] that the way to do this is to perform gradient ascent of $L_c$ in the $\lambda$ variables.
The ‘Augmented Lagrangian method of multipliers’ involves the repeated minimization of $L_c(x, \lambda^k)$ using progressively better estimates, $\lambda^k$, of $\lambda^*$. The algorithm starts off with an arbitrary starting point $\lambda^0$. It then performs the following steps repeatedly:

1. Locally minimize $L_c(x, \lambda^k)$ and call the minimum point $x^k$.

2. Update the multipliers: $\lambda^{k+1} = \lambda^k + c\, h(x^k)$.
The second step above corresponds to gradient ascent of $L_c$ in the $\lambda$ variables. Basically, this method performs a fast gradient descent of $L_c$ in the $x$ directions in conjunction with a slow gradient ascent of $L_c$ in the $\lambda$ directions. A dynamical system that performs this process in continuous time is given below:

$\frac{dx}{dt} = -\nabla_x L_c(x, \lambda), \qquad \frac{d\lambda_j}{dt} = \epsilon\, h_j(x),$

where $\epsilon$ is a small constant that keeps the multiplier ascent slow relative to the descent.
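A minimal sketch of the method, on a simple equality-constrained toy problem of our own choosing (the penalty coefficient $c$, step sizes, and iteration counts are illustrative), is given below; it converges to $x^* \approx (0.5, 0.5)$ and $\lambda^* \approx -1$:

```python
import numpy as np

# Toy problem: minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0.
# L_c(x, lam) = f(x) + lam * h(x) + (c / 2) * h(x)^2
c = 10.0
lam = 0.0
x = np.zeros(2)

for _ in range(20):
    # Step 1: minimize L_c(., lam), here by plain gradient descent
    for _ in range(500):
        h = x.sum() - 1.0
        grad = 2 * x + (lam + c * h) * np.ones(2)
        x -= 0.01 * grad
    # Step 2: multiplier update, i.e., gradient ascent in lambda
    lam += c * (x.sum() - 1.0)

print(x, lam)   # approximately [0.5, 0.5] and -1.0
```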
Appendix C Application of Lagrange multipliers to the Ising problem; Cubic terms
In this appendix, we shall explain the role played by the cubic terms (which were neglected in the main text) in the methods of Yamamoto et al. [7], Kalinin et al. [15], and others. It will turn out that the inclusion of cubic terms helps us implement the ‘Augmented Lagrangian method of multipliers’ from Appendix B.
The statement of the Ising problem is as follows:
$\underset{x}{\text{minimize}} \;\; -\sum_{i,j} J_{ij}\, x_i x_j$
$\text{subject to} \;\; x_i^2 = 1, \quad i = 1, \ldots, n.$
The corresponding Lagrange function is given by:

$L(x, \lambda) = -\sum_{i,j} J_{ij}\, x_i x_j + \sum_i \lambda_i \left( x_i^2 - 1 \right).$
Next, we write down the Augmented Lagrange function:

$L_c(x, \lambda) = -\sum_{i,j} J_{ij}\, x_i x_j + \sum_i \lambda_i \left( x_i^2 - 1 \right) + \frac{c}{2} \sum_i \left( x_i^2 - 1 \right)^2.$
Substituting the above Augmented Lagrange function into the dynamical system provided at the end of Appendix B, we get:

$\frac{dx_i}{dt} = \kappa \left[ \sum_j J_{ij}\, x_j - \lambda_i x_i - c \left( x_i^2 - 1 \right) x_i \right],$
$\frac{d\lambda_i}{dt} = \kappa' \left( x_i^2 - 1 \right),$

where $\kappa$ and $\kappa'$ are appropriately chosen step sizes. We notice the equivalence in form between this dynamical system and those in the papers discussed in the main text, and conclude that the cubic terms, $-c\, x_i^3$, that appear in most of those systems are in fact helping to implement the Augmented Lagrangian method of multipliers.
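The following sketch integrates this dynamical system for a small random Ising instance. The instance size, step sizes, and penalty coefficient $c$ are illustrative choices of ours; note how the $-c\,(x_i^2 - 1)\,x_i$ term supplies the amplitude-saturating cubic nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1) + np.triu(J, 1).T

c, dt, eps = 1.0, 0.01, 0.1
x = 0.1 * rng.standard_normal(n)
lam = np.zeros(n)

for _ in range(20000):
    dx = J @ x - lam * x - c * (x**2 - 1.0) * x   # descent, with cubic term
    dlam = eps * (x**2 - 1.0)                     # slow ascent in multipliers
    x += dt * dx
    lam += dt * dlam

spins = np.sign(x)
print(spins, -spins @ J @ spins)
```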
Appendix D Adiabatic method using Kerr nonlinear coupled oscillators
Researchers at Toshiba proposed an adiabatic Ising solver consisting of networks of coupled Kerr nonlinear oscillators [17], [32], [33]. The big difference between the machines we have studied so far and the Toshiba machine is that the Toshiba machine uses dissipation-less optical systems and utilizes the adiabatic method to optimize the Ising Hamiltonian. By dissipation-less, we mean that the coupling between different oscillators is not dissipative but perfectly elastic.
Further, they replace parametric oscillators that use the $\chi^{(2)}$ nonlinearity with Kerr-parametric oscillators that possess both the $\chi^{(2)}$ and the $\chi^{(3)}$ nonlinearities. The $\chi^{(2)}$ nonlinearity gives rise to effects such as parametric amplification, whereas the $\chi^{(3)}$ nonlinearity leads to an intensity-dependent refractive index change. A second-harmonic pump signal is used to achieve parametric amplification. The refractive index of the modes in the oscillators is modulated by their own intensities due to the third-order nonlinearity.
The slowly varying amplitude equations for the sine ($y_i$) and cosine ($x_i$) components of the electric field in their system are given by:

$\frac{dx_i}{dt} = p\, x_i + K \left( x_i^2 + y_i^2 \right) y_i - \xi \sum_j J_{ij}\, y_j,$
$\frac{dy_i}{dt} = -p\, y_i - K \left( x_i^2 + y_i^2 \right) x_i + \xi \sum_j J_{ij}\, x_j,$

where $x_i$ and $y_i$ represent the cosine and sine components of the $i$-th oscillator, $p$ refers to the strength of the parametric pumping, $K$ is the value of the third-order nonlinearity (Kerr coefficient), and $\xi$ is the strength of the coupling interaction between the oscillators.
We rotate the axes and recast the above set of equations in phasor notation, $a_i = x_i + i\, y_i$, as follows, to elucidate the similarity between this method and that of Cambridge:

$\frac{da_i}{dt} = p\, a_i^{*} - i K \left| a_i \right|^2 a_i + i\, \xi \sum_j J_{ij}\, a_j.$
The first term on the right-hand side is the parametric gain, the second term is the nonlinear rotation caused by the Kerr nonlinearity, and the third term is the dissipation-less coupling between oscillators.
The machine, which follows the adiabatic principle, works by ramping up the value of the pump $p$ adiabatically. The authors show that their system Hamiltonian mimics the Ising Hamiltonian well when $p$ becomes large enough.
We note the interesting fact that, though the system possesses parametric gain, it does not blow up to infinity, owing to the presence of the nonlinear rotation term. This term ensures that the quadrature vector keeps traversing gain and loss regions periodically in phase space, hence keeping the amplitudes in check.
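A rough numerical sketch of these dynamics, using our phasor reconstruction above with an illustrative pump ramp and parameter values of our own choosing, exhibits this qualitative behavior: the amplitudes stay bounded despite the parametric gain, and the cosine quadratures settle toward a spin configuration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1) + np.triu(J, 1).T

K, xi, dt = 1.0, 0.1, 0.005
a = 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

steps = 40000
for t in range(steps):
    p = 2.0 * t / steps                           # ramp the pump adiabatically
    da = p * np.conj(a) - 1j * K * np.abs(a)**2 * a + 1j * xi * (J @ a)
    a += dt * da

print(np.sign(a.real))                            # cosine quadratures as spins
```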
Appendix E Iterative Analog Matrix Multipliers
In this appendix, we present the system designed by Soljacic et al. [16]. We shall see that the simplified system we presented in Section 6.1 of the main text differs from that of Soljacic et al. significantly, in that their method has added noise and nonlinear thresholding after each iteration. It is possible that their modifications lead to performance improvements.
Formally, their iteration is given by:

$a_{t+1} = \theta \left( M a_t + z_t \right),$

where $a_t$ is the vector of electric field amplitude values at the beginning of the $t$-th iteration, $\theta$ is the Heaviside step function that is 1 for positive arguments and 0 for negative arguments, $z_t$ is a zero-mean Gaussian random noise vector, and $M$ is a matrix built from $J$, the Ising connectivity matrix, $\alpha$, a real number, and $D$, some suitably chosen matrix. More specifically, $D$ is chosen to have the same eigenvectors as $J$. It will turn out that the eigenvalues of $D$ play the role of Lagrange multipliers.
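The following Python sketch implements an iteration of this form. The precise composition of $M$, the spectrum chosen for $D$, and the noise strength are our own assumptions, made only to illustrate the thresholded, noisy dynamics; they are not taken from [16].

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1) + np.triu(J, 1).T

# D shares J's eigenvectors; here we (arbitrarily) give it the absolute
# eigenvalue spectrum of J. The combination below for M is assumed.
evals, V = np.linalg.eigh(J)
D = V @ np.diag(np.abs(evals)) @ V.T
alpha = 0.5                              # assumed real parameter
M = J - alpha * D                        # assumed composition of M
sigma = 1.0                              # assumed noise strength

a = rng.choice([0.0, 1.0], size=n)       # binary field amplitudes
for _ in range(1000):
    z = sigma * rng.standard_normal(n)   # zero-mean Gaussian noise
    a = np.heaviside(M @ a + z, 0.0)     # noisy, thresholded update

spins = 2 * a - 1                        # map {0, 1} to {-1, +1}
print(spins, -spins @ J @ spins)
```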
The authors showed that under the condition of high noise, their system performs minimization of the following effective merit function:

$f(a) = -a^{T} J a + \eta\, a^{T} D a,$

where $\eta$ is some parameter dependent on the noise.
Using the fact that the matrix $D$ is chosen to have the same eigenvectors as $J$, we rewrite the above merit function, modulo additive constants, as the following Lagrange function:

$L(\tilde{a}) = -\tilde{a}^{T} \tilde{J}\, \tilde{a} + \eta \sum_i \lambda_i \left( \tilde{a}_i^2 - 1 \right),$

where the $\lambda_i$ are the eigenvalues of the matrix $D$, $\tilde{J}$ is the interaction matrix in its eigenbasis, and the vector $\tilde{a}$ is the vector of electric field amplitudes expressed in the basis of eigenvectors of $D$ (that is, the eigenvectors of $J$). We see that the eigenvalues of $D$ play the role of Lagrange multipliers, albeit for different constraints than those required by the Ising problem. This difference seems to be caused by $D$ not being a diagonal matrix.
In conclusion, we interpret their algorithm as optimizing a Lagrange function whose merit function is the Ising Hamiltonian itself, with the constraints being that the components of the spin vector, when expressed in the eigenvector basis of $J$, be restricted to $+1$ and $-1$.
References
- [1] Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nature Photonics, vol. 11, pp. 441–446, July 2017.
- [2] L. Onsager, “Reciprocal Relations in Irreversible Processes. II.,” Physical Review, vol. 38, pp. 2265–2279, Dec. 1931.
- [3] A. Lucas, “Ising formulations of many NP problems,” arXiv preprint arXiv:1302.5843, 2013.
- [4] S. Utsunomiya, K. Takata, and Y. Yamamoto, “Mapping of Ising models onto injection-locked laser systems,” Optics express, vol. 19, no. 19, pp. 18091–18108, 2011.
- [5] Z. Wang, A. Marandi, K. Wen, R. L. Byer, and Y. Yamamoto, “Coherent Ising machine based on degenerate optical parametric oscillators,” Physical Review A, vol. 88, no. 6, p. 063853, 2013.
- [6] A. Marandi, Z. Wang, K. Takata, R. L. Byer, and Y. Yamamoto, “Network of time-multiplexed optical parametric oscillators as a coherent Ising machine,” Nature Photonics, vol. 8, no. 12, p. 937, 2014.
- [7] Y. Haribara, S. Utsunomiya, and Y. Yamamoto, “Computational Principle and Performance Evaluation of Coherent Ising Machine Based on Degenerate Optical Parametric Oscillator Network,” Entropy, vol. 18, p. 151, Apr. 2016.
- [8] P. L. McMahon, A. Marandi, Y. Haribara, R. Hamerly, C. Langrock, S. Tamate, T. Inagaki, H. Takesue, S. Utsunomiya, and K. Aihara, “A fully programmable 100-spin coherent Ising machine with all-to-all connections,” Science, vol. 354, no. 6312, pp. 614–617, 2016.
- [9] T. Inagaki, K. Inaba, R. Hamerly, K. Inoue, Y. Yamamoto, and H. Takesue, “Large-scale Ising spin network based on degenerate optical parametric oscillators,” Nature Photonics, vol. 10, no. 6, p. 415, 2016.
- [10] T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, O. Tadanaga, H. Takenouchi, K. Aihara, K.-i. Kawarabayashi, K. Inoue, S. Utsunomiya, and H. Takesue, “A coherent Ising machine for 2000-node optimization problems,” Science, vol. 354, pp. 603–606, Nov. 2016.
- [11] T. Leleu, Y. Yamamoto, P. L. McMahon, and K. Aihara, “Destabilization of Local Minima in Analog Spin Systems by Correction of Amplitude Heterogeneity,” Physical Review Letters, vol. 122, p. 040607, Feb. 2019.
- [12] T. P. Xiao, Optoelectronics for refrigeration and analog circuits for combinatorial optimization. PhD Thesis, UC Berkeley, 2019.
- [13] M. Babaeian, D. T. Nguyen, V. Demir, M. Akbulut, P.-A. Blanche, Y. Kaneda, S. Guha, M. A. Neifeld, and N. Peyghambarian, “A single shot coherent Ising machine based on a network of injection-locked multicore fiber lasers,” Nature Communications, vol. 10, p. 3516, Dec. 2019.
- [14] T. Wang and J. Roychowdhury, “OIM: Oscillator-Based Ising Machines for Solving Combinatorial Optimisation Problems,” in Unconventional Computation and Natural Computation (I. McQuillan and S. Seki, eds.), vol. 11493, pp. 232–256, Cham: Springer International Publishing, 2019.
- [15] K. P. Kalinin and N. G. Berloff, “Global optimization of spin Hamiltonians with gain-dissipative systems,” Scientific Reports, vol. 8, p. 17791, Dec. 2018.
- [16] C. Roques-Carmes, Y. Shen, C. Zanoci, M. Prabhu, F. Atieh, L. Jing, T. Dubček, C. Mao, M. R. Johnson, V. Čeperić, J. D. Joannopoulos, D. Englund, and M. Soljačić, “Heuristic recurrent algorithms for photonic Ising machines,” Nature Communications, vol. 11, p. 249, Dec. 2020.
- [17] H. Goto, K. Tatsumura, and A. R. Dixon, “Combinatorial optimization by simulating adiabatic bifurcations in nonlinear Hamiltonian systems,” Science Advances, vol. 5, p. eaav2372, Apr. 2019.
- [18] B. Molnár, F. Molnár, M. Varga, Z. Toroczkai, and M. Ercsey-Ravasz, “A continuous-time MaxSAT solver with high analog performance,” Nature Communications, vol. 9, p. 4864, Dec. 2018.
- [19] D. Pierangeli, G. Marcucci, and C. Conti, “Large-Scale Photonic Ising Machine by Spatial Light Modulation,” Physical Review Letters, vol. 122, p. 213902, May 2019.
- [20] D. Pierangeli, M. Rafayelyan, C. Conti, and S. Gigan, “Scalable spin-glass optical simulator,” arXiv:2006.00828 [physics], June 2020. arXiv: 2006.00828.
- [21] I. Prigogine, Etude Thermodynamique des Phénomènes irréversibles. Editions Desoer, Liège, 1947. Chap V.
- [22] S. de Groot, Thermodynamics of Irreversible Processes. Interscience Publishers, Inc., New York, 1951. Chap X.
- [23] N. G. Dickson, M. W. Johnson, M. H. Amin, R. Harris, F. Altomare, A. J. Berkley, P. Bunyk, J. Cai, E. M. Chapple, and P. Chavez, “Thermally assisted quantum annealing of a 16-qubit problem,” Nature communications, vol. 4, p. 1903, 2013.
- [24] S. Boyd and L. Vandenberghe, Convex optimization. Cambridge university press, 2004.
- [25] D. Bertsekas, Nonlinear programming. Athena Scientific, 1999.
- [26] https://web.stanford.edu/~yyye/yyye/Gset/.
- [27] U. Benlic and J.-K. Hao, “Breakout Local Search for the Max-Cut problem,” Engineering Applications of Artificial Intelligence, vol. 26, pp. 1162–1173, Mar. 2013.
- [28] J. Roychowdhury, “Boolean Computation Using Self-Sustaining Nonlinear Oscillators,” Proceedings of the IEEE, vol. 103, pp. 1958–1969, Nov. 2015.
- [29] A. Demir, A. Mehrotra, and J. Roychowdhury, “Phase noise in oscillators: A unifying theory and numerical methods for characterization,” IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 47, no. 5, pp. 655–674, 2000.
- [30] M. Reck, A. Zeilinger, H. J. Bernstein, and P. Bertani, “Experimental realization of any discrete unitary operator,” Physical Review Letters, vol. 73, pp. 58–61, July 1994.
- [31] R. M. Karp, “Reducibility among combinatorial problems,” in Complexity of computer computations, pp. 85–103, Springer, 1972.
- [32] H. Goto, “Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network,” Scientific reports, vol. 6, p. 21686, 2016.
- [33] H. Goto, “Quantum computation based on quantum adiabatic bifurcations of Kerr-nonlinear parametric oscillators,” Journal of the Physical Society of Japan, vol. 88, no. 6, p. 061015, 2019.