
A Framework for Quantum Finite-State Languages with Density Mapping

SeungYeop Baik, Sicheol Sung, and Yo-Sub Han
Yonsei University, Seoul, Republic of Korea
{sybaik2006,sicheol.sung,emmous}@yonsei.ac.kr
Abstract

A quantum finite-state automaton (QFA) is a theoretical model designed to simulate the evolution of a quantum system with finite memory in response to sequential input strings. We define the language of a QFA as the set of strings that lead the QFA to an accepting state when processed from its initial state. QFAs exemplify how quantum computing can achieve greater efficiency than classical computing. Although they are among the simplest quantum models, QFAs are still notably challenging to construct from scratch because of the knowledge of quantum mechanics required to impose unitary constraints on the automata. Furthermore, even when QFAs are correctly assembled, the limitations of current quantum computers may cause fluctuations in the simulation results depending on how an assembled QFA is translated into a quantum circuit.

We present a framework that provides a simple and intuitive way to build QFAs and to maximize simulation accuracy. Our framework relies on two methods: First, it offers predefined constructions for foundational types of QFAs that recognize the special languages ${\mathsf{MOD}}_{n}$ and ${\mathsf{EQU}}_{k}$. These serve as basic building blocks for more complex QFAs; in other words, one can obtain more complex QFAs from these foundational automata using standard language operations. Second, we improve simulation accuracy by converting these QFAs into quantum circuits such that the resulting circuits perform well on noisy quantum computers.

Keywords Quantum finite-state automata  \cdot Quantum circuits  \cdot Framework  \cdot Closure properties  \cdot Error mitigation

1 Introduction

The scientific field of quantum mechanics inspired the advent of quantum computational models in the 1980s [1]. These models have enabled the development of quantum algorithms designed to tackle problems that are infeasible on classical computers. Investment in quantum computing has exploded with the recent development of physical quantum computers. With greater funding, research on quantum resource optimization has been explored across multiple fields, such as chemistry, quantum simulation and computational theory [2, 3, 4, 5].

The intrinsic requirements of quantum algorithms propagate to the quantum computers on which they run. These requirements include the number of qubits, the accuracy of gates and the topological structure. Current quantum computers fail to meet these requirements and rely on error suppression and error mitigation, both of which carry implicit costs. Previous works have examined computational models with restrictions [6, 7, 8, 9, 10]. We consider quantum finite-state automata (QFAs), which represent quantum computers restricted to a limited number of qubits.

The concept of QFAs has been present since the inception of quantum computation [11], and extensive theoretical research has been conducted in this area [12, 13, 14, 15]. However, the realization of QFAs as quantum circuits has only recently emerged with the introduction of functional quantum computers [16]. This research area is still in its early stages, with researchers facing two main challenges: the difficulty of composing QFAs and the inaccuracies of quantum computers.

Constructing QFAs from scratch is challenging for those without knowledge of quantum mechanics or formal language theory. This complexity arises from the constraints on transitions in QFAs, where transitions must adhere to the properties of a unitary matrix. Properly constructed QFAs are still prone to obtaining practical results that deviate significantly from theoretical ones, due to the high error rate associated with quantum computers. This discrepancy is attributed to errors introduced during computation, which can alter the bit-representation of each state in the QFA circuit.

We propose a framework that addresses these challenges and enhances simulation accuracy by providing intuitive QFA composition methods alongside improvements to the transpilation process from QFAs to quantum circuits. Our framework enables users to compose complex QFAs from simple QFAs using language operations. Our proposed methodology extends the notable frameworks of classical automata, such as FAdo [17], FSA [18] and others [19, 20, 21, 22, 23, 24]. In our proposed framework, we also introduce the novel concept of density mapping, a method employed in our framework for determining the bit-representation of each state in a QFA during transpilation. Density mapping is designed to reduce the probability of fluctuation caused by bit-flip errors. Fig. 1 demonstrates how a user can compose and simulate quantum circuits using the proposed framework. The intended workflow of our proposed framework is threefold: (1) constructing simple QFAs, (2) combining these simple QFAs with language operations to compose a complex QFA and (3) transpiling the resulting complex QFA into a circuit using density mapping. The given workflow lends to a more intuitive approach to quantum circuit construction as well as improved simulation accuracy therein.

In summary, we propose a framework that facilitates the construction and accurate simulation of QFAs. Our main contributions are as follows.

  1. We introduce a novel QFA construction for unary finite languages; these QFAs serve as the elementary units of our framework.

  2. We prove closure properties of various language classes defined by QFAs to illustrate how our framework combines QFAs.

  3. We propose the $\mathcal{D}$-mapping technique to enhance the QFA simulation accuracy on quantum computers.

Figure 1: The three main features of our framework are as follows: basic blocks, language operations and density mapping. Each feature is presented in (a) a formal description, (b) a circuit and (c) a graphical representation.

The rest of the paper is organized as follows. Section 2 provides a brief background on QFAs and their languages. In Section 3, we present details of our framework. We demonstrate the effectiveness of our density mapping in Section 4 by running experiments on the proposed framework. Finally, we summarize the results and outline possible future work in Section 5. Our framework is publicly accessible at https://github.com/sybaik1/qfa-toolkit.

2 Preliminaries

We first describe notations and definitions of two types of QFA: one is a measure-once (MO-) QFA and the other is a measure-many (MM-) QFA. We then define several types of languages recognized by MO-QFAs and MM-QFAs.

2.1 Basic Notations

Symbols $\mathbb{R}$ and $\mathbb{C}$ denote the set of real numbers and the set of complex numbers, respectively. For a finite set $Q$ with $n$ elements, ${\mathbb{C}}^{Q}$ and ${\mathbb{C}}^{Q\times Q}$ denote the set of column vectors of dimension $n$ and the set of $n\times n$ matrices whose components are indexed by elements of $Q$ and $Q\times Q$, respectively. For an element $a\in Q$, $\ket{a}\in{\mathbb{C}}^{Q}$ is the unit vector whose components are all zero except the $a$-indexed component. The set $\{\ket{a}\mid a\in Q\}$ is the standard basis of the vector space ${\mathbb{C}}^{Q}$. A vector $\psi\in{\mathbb{C}}^{Q}$ has a unique expression as a linear combination:

\psi=\sum_{a\in Q}\alpha_{a}\cdot\ket{a},\quad\text{where }\alpha_{a}\in{\mathbb{C}}\text{ for all }a\in Q.

Then, $|\psi|:=\sqrt{\sum_{a\in Q}|\alpha_{a}|^{2}}$ denotes the length of $\psi$.

We use ${\mathbf{M}}^{T}$, ${\mathbf{M}}^{-1}$, $\overline{{\mathbf{M}}}$ and ${\mathbf{M}}^{\dagger}$ to denote the transpose, inverse, conjugate and conjugate-transpose of a matrix ${\mathbf{M}}$, respectively, and $\det{\mathbf{M}}$ denotes its determinant. A matrix ${\mathbf{U}}$ is unitary if it satisfies the equality ${\mathbf{U}}^{-1}={\mathbf{U}}^{\dagger}$. Given a finite set $A$ of indices and a set $S\subseteq A$, the projection matrix ${\mathbf{P}}_{S}$ onto $S$ is the zero-one diagonal matrix with $({\mathbf{P}}_{S})_{(a,a)}=1$ if $a\in S$ and $({\mathbf{P}}_{S})_{(a,a)}=0$ otherwise. For matrices ${\mathbf{M}}_{1}\in{\mathbb{C}}^{A\times A}$ and ${\mathbf{M}}_{2}\in{\mathbb{C}}^{B\times B}$ with $A\cap B=\emptyset$, their direct sum is

{\mathbf{M}}_{1}\oplus{\mathbf{M}}_{2}=\begin{bmatrix}{\mathbf{M}}_{1}&{\mathbf{0}}\\ {\mathbf{0}}&{\mathbf{M}}_{2}\end{bmatrix}\in{\mathbb{C}}^{(A\cup B)\times(A\cup B)}.

A superposition of $n$ qubits is formalized as a $2^{n}$-dimensional complex vector of length 1, and its evolution is described by a unitary matrix. The basis vectors of the superposition are $\ket{00\cdots 0},\ket{10\cdots 0},\ldots,\ket{11\cdots 1}$, where each digit corresponds to the value of one qubit. The measurement of a superposition is formalized by an observable, which is a partitioning $A_{1}\cup\cdots\cup A_{k}$ of the dimensions. When a superposition $\ket{\psi}$ is measured with an observable, it collapses to one of the vectors $\ket{\psi^{\prime}_{i}}:={\mathbf{P}}_{A_{i}}\ket{\psi}$, each with probability $|\ket{\psi^{\prime}_{i}}|^{2}$.

2.2 Quantum Finite-State Automata and Languages

A finite-state automaton (FA) is a model that simulates the behavior of a system changing its state between a fixed number of distinct states. The automaton updates its state in an online manner in response to external inputs—formalized as strings over a finite alphabet. A quantum finite-state automaton (QFA) is a quantum variant of FA, which can exist in a superposition comprised of a finite number of states. We consider two different types of QFAs: an MO-QFA, which is observed only at the end of sequential inputs [25], and an MM-QFA, which is observed after every step of the sequence [11].

Let Σ\Sigma denote a finite input alphabet, which consists of the symbols used in sequential input. A semi-QFA then describes the evolution of a QFA for each symbol. Fig. 2 shows how transitions of a semi-QFA are represented graphically.

Definition 1.

A semi-QFA is a tuple $(Q,\Sigma,\{{\mathbf{U}}_{\gamma}\}_{\gamma\in\Gamma})$, where (1) $Q$ is a finite set of states, (2) ${\mathbf{U}}_{\gamma}$ is a unitary transition matrix for each symbol $\gamma\in\Gamma$ and (3) $\Gamma:=\Sigma\cup\{{\#},{\$}\}$ is the tape alphabet, with ${\#}$ and ${\$}$ as the start-of-string and end-of-string symbols, respectively.

Figure 2: The relation between (a) a unitary transition matrix of a semi-QFA, (b) the change of superposition on the Bloch sphere according to a transition and (c) a graphical representation of the matrix's transitions. Transitions of QFAs, for example $\ket{q_{0}}\mapsto {\mathbf{U}}_{a}\ket{q_{0}}$, are represented as unitary transformations in Hilbert space.

We can define MO-QFAs and MM-QFAs as semi-QFAs with two defining distinctions: the inclusion of an initial state as well as that of final states, which are either accepting or rejecting.

Definition 2.

An MO-QFA is specified by a tuple $(Q,\Sigma,\{{\mathbf{U}}_{\gamma}\}_{\gamma\in\Gamma},q_{\textsf{init}},Q_{\textsf{acc}})$, where (1) the triple $(Q,\Sigma,\{{\mathbf{U}}_{\gamma}\}_{\gamma\in\Gamma})$ is a semi-QFA; (2) $q_{\textsf{init}}\in Q$ is an initial state; and (3) $Q_{\textsf{acc}}\subseteq Q$ is a set of accepting states. (In some other papers, the tape alphabet of an MO-QFA does not include the end-of-string symbol; this does not affect the expressive power or the size of MO-QFAs [12].)

An MO-QFA reads each symbol of a string one by one and updates its superposition of states at each step according to the transition matrix corresponding to the symbol. After reading the entire string, the MO-QFA measures the superposition of states. It accepts the string if and only if the measured state is an accepting state; otherwise, it rejects the string.

Formally, for a string $x=x_{1}x_{2}\cdots x_{n}\in\Gamma^{*}$ over the tape alphabet, we denote the unitary matrix ${\mathbf{U}}_{x_{n}}{\mathbf{U}}_{x_{n-1}}\cdots{\mathbf{U}}_{x_{1}}$ by ${\mathbf{U}}_{x}$. We also write ${\mathbf{P}}_{\textsf{acc}}$, ${\mathbf{P}}_{\textsf{rej}}$ and ${\mathbf{P}}_{\textsf{non}}$ for ${\mathbf{P}}_{Q_{\textsf{acc}}}$, ${\mathbf{P}}_{Q_{\textsf{rej}}}$ and ${\mathbf{P}}_{Q_{\textsf{non}}}$, respectively, for simplicity of notation. Then, the probability that an MO-QFA $M$ accepts a string $w$ is $|{\mathbf{P}}_{\textsf{acc}}\ket{\psi}|^{2}$, where $\ket{\psi}$ is the superposition ${\mathbf{U}}_{{\#}w{\$}}\ket{q_{\textsf{init}}}$ of the QFA after reading the end-of-string symbol.
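For illustration, the acceptance probability of an MO-QFA can be evaluated directly from this definition with a few lines of linear algebra. The following is a minimal sketch (not code from our framework), assuming the transition matrices are given as NumPy arrays indexed by the tape symbols.

```python
import numpy as np

def mo_qfa_accept_prob(unitaries, init_index, acc_indices, word):
    """Acceptance probability |P_acc U_{#w$} |q_init>|^2 of an MO-QFA.

    unitaries  : dict mapping each tape symbol ('#', '$', input symbols)
                 to an (n x n) unitary NumPy array
    init_index : index of the initial state q_init
    acc_indices: indices of the accepting states Q_acc
    word       : input string w over the input alphabet
    """
    n = unitaries['#'].shape[0]
    psi = np.zeros(n, dtype=complex)
    psi[init_index] = 1.0                      # start in |q_init>
    for symbol in '#' + word + '$':            # read the tape # w $
        psi = unitaries[symbol] @ psi
    p_acc = np.zeros((n, n))                   # projection onto Q_acc
    for a in acc_indices:
        p_acc[a, a] = 1.0
    return float(np.linalg.norm(p_acc @ psi) ** 2)
```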

Definition 3.

An MM-QFA is specified by a tuple $(Q,\Sigma,\{{\mathbf{U}}_{\gamma}\}_{\gamma\in\Gamma},q_{\textsf{init}},Q_{\textsf{acc}},Q_{\textsf{rej}})$, where (1) $(Q,\Sigma,\{{\mathbf{U}}_{\gamma}\}_{\gamma\in\Gamma})$ is a semi-QFA; (2) $q_{\textsf{init}}\in Q$ is an initial state; and (3) $Q_{\textsf{acc}},Q_{\textsf{rej}}\subseteq Q$ are sets of accepting and rejecting states, respectively. We denote the set of non-halting states by $Q_{\textsf{non}}:=Q\setminus(Q_{\textsf{acc}}\cup Q_{\textsf{rej}})$.

An MM-QFA reads the tape symbols one by one for a given string and updates the superposition according to the transition matrix, similar to MO-QFAs. However, unlike MO-QFAs, it measures its superposition at the end of each step, causing the superposition to collapse onto one of three sub-spaces ${\mathbb{C}}^{Q_{\textsf{acc}}}$, ${\mathbb{C}}^{Q_{\textsf{rej}}}$ or ${\mathbb{C}}^{Q_{\textsf{non}}}$. The MM-QFA then accepts or rejects the string immediately if the resulting superposition lies in ${\mathbb{C}}^{Q_{\textsf{acc}}}$ or ${\mathbb{C}}^{Q_{\textsf{rej}}}$, respectively. Otherwise, it keeps reading the next symbol.

Kondacs and Watrous [11] introduced total states to formalize the decision process of MM-QFAs. A total state is a triple $(\psi,p^{\prime}_{\textsf{acc}},p^{\prime}_{\textsf{rej}})\in{\mathbb{C}}^{Q}\times{\mathbb{R}}\times{\mathbb{R}}$, where $p^{\prime}_{\textsf{acc}}$ and $p^{\prime}_{\textsf{rej}}$ are the accumulated probabilities of acceptance and rejection. The vector $\psi$ encodes the superposition and the probability of non-halting by its direction and the square of its length, respectively. For an MM-QFA $M$, $T_{\gamma}$ is the evolution function of the internal state of $M$ with respect to $\gamma$, defined as follows:

T_{\gamma}:(\psi,p^{\prime}_{\textsf{acc}},p^{\prime}_{\textsf{rej}})\mapsto({\mathbf{P}}_{\textsf{non}}{\mathbf{U}}_{\gamma}\psi,\;p^{\prime}_{\textsf{acc}}+\Delta p^{\prime}_{\textsf{acc}},\;p^{\prime}_{\textsf{rej}}+\Delta p^{\prime}_{\textsf{rej}}),

where

\Delta p^{\prime}_{\textsf{acc}}:=|{\mathbf{P}}_{\textsf{acc}}{\mathbf{U}}_{\gamma}\psi|^{2}\text{ and }\Delta p^{\prime}_{\textsf{rej}}:=|{\mathbf{P}}_{\textsf{rej}}{\mathbf{U}}_{\gamma}\psi|^{2}.

For a string $x=x_{1}x_{2}\cdots x_{n}\in\Gamma^{*}$ over the tape alphabet, we use $T_{x}$ to denote $T_{x_{n}}\circ T_{x_{n-1}}\circ\cdots\circ T_{x_{1}}$. Then, $(\psi,p_{\textsf{acc}},p_{\textsf{rej}}):=T_{{\#}w{\$}}(\ket{q_{\textsf{init}}},0,0)$ denotes the final total state, and $p_{\textsf{acc}}$ is the probability that $M$ accepts a given string $w$.
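The evolution function $T_{\gamma}$ translates directly into code. The sketch below is our own illustration with hypothetical argument names, not the framework's implementation; it tracks the unnormalized non-halting superposition together with the accumulated acceptance and rejection probabilities.

```python
import numpy as np

def projector(n, indices):
    """Zero-one diagonal projection matrix onto the given state indices."""
    p = np.zeros((n, n))
    for i in indices:
        p[i, i] = 1.0
    return p

def mm_qfa_accept_prob(unitaries, init_index, acc, rej, word):
    """Acceptance probability p_acc of an MM-QFA on a word w, via total states."""
    n = unitaries['#'].shape[0]
    non = [i for i in range(n) if i not in set(acc) | set(rej)]
    p_acc, p_rej, p_non = projector(n, acc), projector(n, rej), projector(n, non)
    psi = np.zeros(n, dtype=complex)
    psi[init_index] = 1.0                      # |q_init>
    acc_prob = rej_prob = 0.0
    for symbol in '#' + word + '$':            # apply T_gamma for each tape symbol
        phi = unitaries[symbol] @ psi
        acc_prob += np.linalg.norm(p_acc @ phi) ** 2
        rej_prob += np.linalg.norm(p_rej @ phi) ** 2
        psi = p_non @ phi                      # keep the non-halting part
    return acc_prob
```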

We say that an MM-QFA $M$ is valid if $p_{\textsf{acc}}+p_{\textsf{rej}}=1$ for any $w\in\Sigma^{*}$. If $p_{\textsf{acc}}=0$ for every $w\in\Sigma^{*}$ such that $(\_,p_{\textsf{acc}},\_):=T_{{\#}w}(\ket{q_{\textsf{init}}},0,0)$, then we say that $M$ is end-decisive. In other words, $M$ does not accept strings before reading the end-of-string symbol. Similarly, $M$ is co-end-decisive if it only rejects strings when reading the end-of-string symbol [12]. For the rest of the paper, we focus solely on valid MM-QFAs.

Let $M$ be an MO-QFA or an MM-QFA, and let $f_{M}(w)$ denote the probability that $M$ accepts $w$. We call the function $f_{M}:\Sigma^{*}\to\mathbb{R}$ the stochastic language of $M$. We say an MO- or MM-QFA $M$ recognizes $L$ with a positive one-sided error of margin $\varepsilon>0$ if (1) $f_{M}(w)=0$ for $w\notin L$ and (2) $f_{M}(w)>\varepsilon$ for $w\in L$. In other words, $M$ always rejects $w\notin L$ and accepts $w\in L$ with probability $f_{M}(w)>\varepsilon$. Similarly, we say $M$ recognizes $L$ with a negative one-sided error of margin $\varepsilon>0$ if (1) $f_{M}(w)=1$ for $w\in L$ and (2) $f_{M}(w)<1-\varepsilon$ for $w\notin L$.

We define the following classes of quantum finite-state languages (QFLs): ${\mathsf{MOQFL}}$, ${\mathsf{MMQFL}}$, ${\mathsf{MMQFL}}_{\mathsf{end}}$ and ${\mathsf{MMQFL}}_{\mathsf{coend}}$ are the languages recognized by MO-QFAs, MM-QFAs, end-decisive MM-QFAs and co-end-decisive MM-QFAs, respectively. We also define the class of complements for each language family, denoted with the prefix ${\mathsf{co}}$-. For example, ${\mathsf{co\text{-}MOQFL}}$ denotes the class $\{L\subseteq\Sigma^{*}\mid\overline{L}\in{\mathsf{MOQFL}}\}$.

2.3 Closure Properties of Stochastic Languages of QFAs

The stochastic languages of MO-QFAs are known to be closed under the following operations [25, 13, 26]; a construction sketch for the Hadamard product is given after the list.

  1. (Hadamard product) $(f_{M}\otimes f_{N})(w)=f_{M}(w)\cdot f_{N}(w)$

  2. (Convex linear combination) For $c_{1},c_{2}\in[0,1]$ with $c_{1}+c_{2}=1$, $(c_{1}f_{M}\oplus c_{2}f_{N})(w)=c_{1}\cdot f_{M}(w)+c_{2}\cdot f_{N}(w)$

  3. (Complement) $\overline{f_{M}}(w)=1-f_{M}(w)$

  4. (Inverse homomorphism) For a homomorphism $h$, $(h^{-1}\circ f_{M})(w)=f_{M}(h(w))$
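For intuition, the Hadamard product of MO-QFAs can be realized by the standard tensor-product construction in which both automata run in parallel. The sketch below illustrates this construction under the same NumPy conventions as the earlier sketches; it is our own illustration, not code from the framework.

```python
import numpy as np

def hadamard_product(unitaries_m, unitaries_n, init_m, init_n, acc_m, acc_n):
    """Tensor-product MO-QFA whose stochastic language is f_M(w) * f_N(w).

    States of the product automaton are pairs (q, p), flattened as q * |P| + p.
    """
    size_n = unitaries_n['#'].shape[0]
    # Kronecker product of the transition matrices for every tape symbol.
    unitaries = {g: np.kron(unitaries_m[g], unitaries_n[g]) for g in unitaries_m}
    init = init_m * size_n + init_n
    acc = [q * size_n + p for q in acc_m for p in acc_n]
    return unitaries, init, acc
```

Together with the mo_qfa_accept_prob sketch above, one can check numerically that the product automaton's acceptance probability equals the product of the components' acceptance probabilities.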

Compared to the stochastic languages of MO-QFAs, those of MM-QFAs are closed under convex linear combination, complement and inverse homomorphism [13, 27], but not under the Hadamard product. For end-decisive and co-end-decisive MM-QFAs, the stochastic languages are closed under convex linear combination and inverse homomorphism; end-decisive MM-QFAs, in particular, are also closed under the Hadamard product [27]. Table 1 summarizes the closure properties of stochastic languages of QFAs. We also present a construction for the convex linear combination to make the paper self-contained.

Table 1: The closure properties of the stochastic languages $f_{M}$ and $f_{N}$ for each class of QFAs. A check mark (✓) and a cross mark (✗) denote whether or not the class is closed under the operation.
QFA Class | $f_{M}\otimes f_{N}$ | $c_{1}f_{M}\oplus c_{2}f_{N}$ | $\overline{f_{M}}$ | $h^{-1}\circ f_{M}$
MO-QFA | ✓ | ✓ | ✓ | ✓
MM-QFA | ✗ | ✓ | ✓ | ✓
End-dec. | ✓ | ✓ | ✗ | ✓
Co-end-dec. | ✗ | ✓ | ✗ | ✓
Proposition 1.

Consider two MM-QFAs

  1. $M:=\left(Q,\Sigma,\{{\mathbf{U}}_{\gamma}\}_{\gamma\in\Gamma},q_{\textsf{init}},Q_{\textsf{acc}},Q_{\textsf{rej}}\right)$ and

  2. $N:=\left(P,\Sigma,\{{\mathbf{V}}_{\gamma}\}_{\gamma\in\Gamma},p_{\textsf{init}},P_{\textsf{acc}},P_{\textsf{rej}}\right)$.

For $c_{1},c_{2}\in[0,1]$ with $c_{1}+c_{2}=1$, the following MM-QFA $c_{1}M\oplus c_{2}N$ satisfies $f_{c_{1}M\oplus c_{2}N}(w)=c_{1}f_{M}(w)+c_{2}f_{N}(w)$:

c_{1}M\oplus c_{2}N:=(Q\cup P,\Sigma,\{{\mathbf{W}}_{\gamma}\}_{\gamma\in\Gamma},q_{\textsf{init}},Q_{\textsf{acc}}\cup P_{\textsf{acc}},Q_{\textsf{rej}}\cup P_{\textsf{rej}}),

where ${\mathbf{W}}_{\gamma}:={\mathbf{U}}_{\gamma}\oplus{\mathbf{V}}_{\gamma}$ for $\gamma\in\Sigma\cup\{{\$}\}$ and

{\mathbf{W}}_{\#}:=\left({\mathbf{U}}_{\#}\oplus{\mathbf{V}}_{\#}\right)\cdot\begin{bmatrix}\sqrt{c_{1}}&{\mathbf{0}}&i\sqrt{c_{2}}&{\mathbf{0}}\\ {\mathbf{0}}&{\mathbf{I}}&{\mathbf{0}}&{\mathbf{0}}\\ i\sqrt{c_{2}}&{\mathbf{0}}&\sqrt{c_{1}}&{\mathbf{0}}\\ {\mathbf{0}}&{\mathbf{0}}&{\mathbf{0}}&{\mathbf{I}}\end{bmatrix}\in{\mathbb{C}}^{(Q\cup P)\times(Q\cup P)},

with the rows and columns of the right-hand factor ordered as $q_{\textsf{init}}$, $Q\setminus\{q_{\textsf{init}}\}$, $p_{\textsf{init}}$, $P\setminus\{p_{\textsf{init}}\}$.

If $M$ and $N$ are both end-decisive or both co-end-decisive, then the resulting MM-QFA is also end-decisive or co-end-decisive, respectively.
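The construction of Proposition 1 is straightforward to realize numerically. The following sketch is an illustration under the assumption that the states of each automaton are ordered with its initial state first; it builds the combined transition matrices ${\mathbf{W}}_{\gamma}$ and is not the framework's implementation.

```python
import numpy as np

def convex_combination(unitaries_m, unitaries_n, c1, c2):
    """Transition matrices of c1*M (+) c2*N from Proposition 1 (sketch).

    Assumes each automaton's states are ordered so that its initial state
    has index 0; the combined state set is Q followed by P.
    """
    n_q = unitaries_m['#'].shape[0]
    n_p = unitaries_n['#'].shape[0]

    def direct_sum(a, b):
        w = np.zeros((n_q + n_p, n_q + n_p), dtype=complex)
        w[:n_q, :n_q] = a
        w[n_q:, n_q:] = b
        return w

    unitaries = {g: direct_sum(unitaries_m[g], unitaries_n[g])
                 for g in unitaries_m if g != '#'}
    # Unitary rotation splitting amplitude between q_init (index 0) and
    # p_init (index n_q), as in the block matrix of Proposition 1.
    rot = np.eye(n_q + n_p, dtype=complex)
    rot[0, 0] = rot[n_q, n_q] = np.sqrt(c1)
    rot[0, n_q] = rot[n_q, 0] = 1j * np.sqrt(c2)
    unitaries['#'] = direct_sum(unitaries_m['#'], unitaries_n['#']) @ rot
    return unitaries
```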

3 Proposed Framework for QFLs

It is more complex to construct QFAs as opposed to classic FAs due to the unitary constraints that a QFA must preserve. On top of the difficulties with QFAs, when constructing a quantum circuit that recognizes a QFL, one has to consider a margin with the accepting probability and rejecting probability—the accepting policy [12, 28]. Several accepting policies exist, such as one-sided errors, two-sided errors, isolated cut-points, threshold acceptance and threshold with margin [13]. In our framework, we only consider one-sided errors since they offer a compromise between the language coverage and the closure properties.

When designing a quantum circuit for a language, a typical process of creating its QFA proceeds as follows: First, determine the number of states for the target QFA. Then, construct a transition matrix for each symbol that adheres to the unitary constraints. Subsequently, verify whether the created QFA can differentiate between accepting and rejecting strings. If it fails to do so, adjustments to the transition design or an increase in the number of states are necessary. If the target QFA functions as intended, it can be transpiled into a quantum circuit.

The aforementioned process is not the only way to design a circuit, but some such iterative cycle of refinement, with both advances and missteps, is necessary to develop the target quantum circuit. We avoid these errors and inconveniences by encapsulating the design process through the use of QFLs as building blocks. By doing so, we can employ operations between the building blocks to construct the desired QFL. In our framework, we provide constructions for the two languages ${\mathsf{MOD}}_{p}$ and ${\mathsf{EQU}}_{k}$ as building blocks, leveraging them to create more complex and diverse languages. In the above scenario, instead of deciding on the size and transitions, we can decompose the language into a set of expressions over ${\mathsf{MOD}}_{p}$ and ${\mathsf{EQU}}_{k}$ and then compose them to obtain the QFA. We can finally transpile the QFA with our density mapping to obtain the target quantum circuit.

3.1 Basic Blocks—Two QFLs with Simple Constructions

We settle on small and simple QFAs as the building blocks of our framework and give the characteristics and properties of the QFAs for the two languages ${\mathsf{MOD}}_{n}$ and ${\mathsf{EQU}}_{k}$.

First, we revisit ${\mathsf{MOD}}_{n}:=\{a^{j}\mid j\equiv 0\bmod n\}$, which has been studied in several previous approaches due to its small and simple circuit [16]. It is known that there exists a two-state MO-QFA that recognizes the language ${\mathsf{MOD}}_{n}$ with a bounded negative one-sided error. Furthermore, for each prime $p$, there exists an $O(\log p)$-state MO-QFA recognizing ${\mathsf{MOD}}_{p}$ with an error margin of $1/8$, which is independent of the choice of $p$. The circuits for ${\mathsf{MOD}}_{n}$ and ${\mathsf{MOD}}_{p}$ require only one and $O(\log\log p)$ qubits, respectively [28]. Next, we present our construction of an MM-QFA for ${\mathsf{EQU}}_{k}:=\{a^{k}\}$.

Theorem 1.

Let $\{a\}$ be a unary alphabet. For the singleton language ${\mathsf{EQU}}_{k}$ and parameters $\theta,\varphi$ with $0<\theta,\varphi<\frac{\pi}{2}$, let $M_{k}(\theta,\varphi)$ be the co-end-decisive MM-QFA defined as follows:

M_{k}:=(Q,\{a\},\{{\mathbf{U}}_{\gamma}\}_{\gamma\in\{{\#},a,{\$}\}},q_{0},\{q_{\textsf{acc}}\},\{q_{\textsf{rej}}\}),

where $Q:=\{q_{0},q_{1},q_{\textsf{acc}},q_{\textsf{rej}}\}$. The unitary operators are given by

{\mathbf{U}}_{\#}:=\begin{bmatrix}\cos\theta&-\sin\theta&0&0\\ \sin\theta&\cos\theta&0&0\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix},\quad{\mathbf{U}}_{a}:=\begin{bmatrix}\cos\varphi&0&-\sin\varphi&0\\ 0&1&0&0\\ \sin\varphi&0&\cos\varphi&0\\ 0&0&0&1\end{bmatrix}\quad\text{and}\quad{\mathbf{U}}_{\$}:=\frac{1}{\sqrt{C_{k}}}\cdot\begin{bmatrix}{\mathbf{0}}&{\mathbf{A}}_{k}^{\dagger}\\ {\mathbf{A}}_{k}&{\mathbf{0}}\end{bmatrix},

where ${\mathbf{A}}_{k}$ is the sub-matrix

{\mathbf{A}}_{k}:=\begin{bmatrix}\cos^{k}\varphi\cdot\cos\theta&\sin\theta\\ -\sin\theta&\cos^{k}\varphi\cdot\cos\theta\end{bmatrix},

and the normalization factor $C_{k}$ is given by $C_{k}:=\cos^{2k}\varphi\cos^{2}\theta+\sin^{2}\theta$, so that ${\mathbf{U}}_{\$}$ is unitary. Then, $M_{k}(\theta,\varphi)$ recognizes the language $\{a^{k}\}$ with a negative one-sided error of any margin $\varepsilon$ satisfying

0<\varepsilon<C_{k}^{-1}\cdot(\cos\theta\sin\theta\cos^{k}\varphi\cdot(1-\cos\varphi))^{2}.
Proof.

Fix $M:=M_{k}(\theta,\varphi)$ for some parameters $k$, $\theta$ and $\varphi$. The validity of $M$ is obvious from the construction. For each $j\geq 0$, let $T_{{\#}a^{j}}(\ket{q_{0}},0,0)=:(\psi_{j},p_{{\textsf{acc}},j},p_{{\textsf{rej}},j})$. Then, $\psi_{0}={\mathbf{P}}_{\textsf{non}}{\mathbf{U}}_{\#}\ket{q_{0}}=\cos\theta\ket{q_{0}}+\sin\theta\ket{q_{1}}$ and $\psi_{j+1}={\mathbf{P}}_{\textsf{non}}{\mathbf{U}}_{a}\psi_{j}$. By induction, $\psi_{j}=\cos^{j}\varphi\cos\theta\ket{q_{0}}+\sin\theta\ket{q_{1}}$. Now we consider the final total state $T_{{\#}a^{j}{\$}}(\ket{q_{0}},0,0)=:(\psi,p_{\textsf{acc}}(j),p_{\textsf{rej}}(j))$. The rejecting probability of a string $a^{j}$ is

p_{\textsf{rej}}(j)=|{\mathbf{P}}_{\textsf{rej}}{\mathbf{U}}_{\$}\psi_{j}|^{2}=|{\mathbf{P}}_{\textsf{rej}}{\mathbf{U}}_{\$}(\cos^{j}\varphi\cos\theta\cdot\ket{q_{0}}+\sin\theta\cdot\ket{q_{1}})|^{2}=C_{k}^{-1}\cdot|(\cos^{k}\varphi-\cos^{j}\varphi)\cos\theta\sin\theta\cdot\ket{q_{\textsf{rej}}}|^{2},

and $f_{M}(a^{k})=p_{\textsf{acc}}(k)=1-p_{\textsf{rej}}(k)=1$. From the observation that $\operatorname*{arg\,min}_{j\neq k}p_{\textsf{rej}}(j)=k+1$, we finally obtain, for every $j\neq k$,

f_{M}(a^{j})=1-p_{\textsf{rej}}(j)\leq 1-p_{\textsf{rej}}(k+1)=1-C_{k}^{-1}\cdot(\cos\theta\sin\theta\cos^{k}\varphi\cdot(1-\cos\varphi))^{2}. ∎
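To make the construction concrete, the matrices of $M_{k}(\theta,\varphi)$ can be assembled as follows. This is an illustrative sketch using the state order $q_{0},q_{1},q_{\textsf{acc}},q_{\textsf{rej}}$, not code from the framework.

```python
import numpy as np

def equ_qfa(k, theta, phi):
    """Unitaries of the MM-QFA M_k(theta, phi) for EQU_k = {a^k} (Theorem 1).

    State order: q0, q1, q_acc, q_rej.
    """
    c, s = np.cos(theta), np.sin(theta)
    u_start = np.array([[c, -s, 0, 0],
                        [s,  c, 0, 0],
                        [0,  0, 1, 0],
                        [0,  0, 0, 1]], dtype=complex)
    cp, sp = np.cos(phi), np.sin(phi)
    u_a = np.array([[cp, 0, -sp, 0],
                    [0,  1,  0,  0],
                    [sp, 0,  cp, 0],
                    [0,  0,  0,  1]], dtype=complex)
    a_k = np.array([[cp**k * c,  s],
                    [-s,         cp**k * c]], dtype=complex)
    c_k = cp**(2 * k) * c**2 + s**2            # normalization factor C_k
    u_end = np.block([[np.zeros((2, 2)), a_k.conj().T],
                      [a_k,              np.zeros((2, 2))]]) / np.sqrt(c_k)
    return {'#': u_start, 'a': u_a, '$': u_end}
```

Combined with the MM-QFA evolution sketch from Section 2.2, e.g. mm_qfa_accept_prob(equ_qfa(k, theta, phi), 0, [2], [3], 'a' * k), the acceptance probability should evaluate to 1 up to floating-point error, matching Theorem 1.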

One might need to execute the QFA matching process multiple times to obtain a correct result, as the probability of obtaining a correct result in a single run decreases with the increasing length of the input string. The subsequent theorem describes how many repetitions are required to achieve a predetermined probability of accuracy.

Theorem 2.

Fix the parameters $\theta_{0},\varphi_{0}$ and define $\alpha:=\cos\theta_{0}$ and $\beta:=\cos\varphi_{0}$. Let $M_{k}:=M_{k}(\theta_{0},\varphi_{0})$ for each $k$. Then, $f_{M_{k}}(a^{j})^{N}\leq\left(1+\frac{1}{N}\right)^{-N}\approx\frac{1}{e}\approx 0.37$ for each $j\neq k$, where

N:=\frac{1}{(1-\alpha^{2})(1-\beta)^{2}}+\frac{1}{\alpha^{2}\beta^{2k}(1-\beta)^{2}}-1\in 2^{\Omega(k)}.
Proof.

Note that $a^{j}$ has the minimum rejection probability when $j=k+1$, except in the case where $j=k$. We can compute the value of $f_{M_{k}}(a^{k+1})$ as follows:

\begin{aligned}
f_{M_{k}}(a^{k+1})&=1-\frac{(\cos\theta_{0}\cdot\sin\theta_{0}\cdot\cos^{k}\varphi_{0}(1-\cos\varphi_{0}))^{2}}{\cos^{2k}\varphi_{0}\cdot\cos^{2}\theta_{0}+\sin^{2}\theta_{0}}\\
&=\frac{\alpha^{2}\beta^{2k}+(1-\alpha^{2})-\alpha^{2}(1-\alpha^{2})\beta^{2k}(1-\beta)^{2}}{\alpha^{2}\beta^{2k}+(1-\alpha^{2})}\\
&=\left(1+\frac{\alpha^{2}(1-\alpha^{2})\beta^{2k}(1-\beta)^{2}}{\alpha^{2}\beta^{2k}+(1-\alpha^{2})-\alpha^{2}(1-\alpha^{2})\beta^{2k}(1-\beta)^{2}}\right)^{-1}\\
&=\left(1+\left(\frac{1}{(1-\alpha^{2})(1-\beta)^{2}}+\frac{1}{\alpha^{2}\beta^{2k}(1-\beta)^{2}}-1\right)^{-1}\right)^{-1}\\
&=\left(1+\frac{1}{N}\right)^{-1}.
\end{aligned}

Then, $f_{M_{k}}(a^{j})^{N}\leq f_{M_{k}}(a^{k+1})^{N}=\left(1+\frac{1}{N}\right)^{-N}\approx\frac{1}{e}$ for $j\neq k$. Therefore, when $\theta$ and $\varphi$ are fixed independently of $L_{k}$, one has to run $M_{k}(\theta,\varphi)$ on $w$ $2^{O(k)}$ times to decide whether $w\in L_{k}$ within a fixed probability. ∎
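The bound $N$ of Theorem 2 is easy to evaluate for concrete parameters. The following small sketch (our own illustration) computes it directly from the formula and makes the exponential growth in $k$ visible.

```python
import numpy as np

def repetitions_bound(k, theta0, phi0):
    """N from Theorem 2: the number of runs needed to reject a^j (j != k)
    with constant probability when theta0, phi0 are fixed independently of k."""
    alpha, beta = np.cos(theta0), np.cos(phi0)
    return (1.0 / ((1 - alpha**2) * (1 - beta)**2)
            + 1.0 / (alpha**2 * beta**(2 * k) * (1 - beta)**2) - 1.0)
```

For example, with $\theta_{0}=\varphi_{0}=\pi/4$ the second term is proportional to $2^{k}$, so the bound roughly doubles each time $k$ increases by one.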

Note that Theorems 1 and 2 examine the setting where the parameters $\theta$ and $\varphi$ are chosen independently of the target language $L_{k}$. The following proposition describes how our framework finds optimal parameters that maximize the error margin for each $k$.

Proposition 2.

Let $\omega$ be the unique positive solution of $\omega^{k+1}+(k+1)\omega-k=0$. Then, the parameters $\theta=\tan^{-1}\sqrt{\omega^{k}}$ and $\varphi=\cos^{-1}\omega$ maximize the error margin of $M_{k}(\theta,\varphi)$.

Proof.

We want to maximize the rejection probability $p_{\textsf{rej}}(\theta,\varphi)$ of $a^{k+1}$, where

p_{\textsf{rej}}(\theta,\varphi)=\frac{(\cos\theta\sin\theta\cos^{k}\varphi(1-\cos\varphi))^{2}}{\cos^{2k}\varphi\cos^{2}\theta+\sin^{2}\theta}.

Let $f(\theta,\varphi):=(p_{\textsf{rej}}(\theta,\varphi))^{-1}$ be the objective function. Then, the following holds:

\begin{aligned}
(\theta_{k},\varphi_{k})&:=\operatorname*{arg\,max}_{\theta,\varphi}p_{\textsf{rej}}(\theta,\varphi)=\operatorname*{arg\,min}_{\theta,\varphi}f(\theta,\varphi)\\
&=\operatorname*{arg\,min}_{\theta,\varphi}\frac{1}{\sin^{2}\theta(1-\cos\varphi)^{2}}+\frac{1}{\cos^{2}\theta\cos^{2k}\varphi(1-\cos\varphi)^{2}}.
\end{aligned}

Since $f$ is differentiable on the domain $0<\theta,\varphi<\frac{\pi}{2}$, the equation $(\partial_{\theta}f)(\theta_{k},\varphi_{k})=(\partial_{\varphi}f)(\theta_{k},\varphi_{k})=0$ holds. Then, we obtain the following equations from the partial derivatives of $f$ with respect to $\theta$ and $\varphi$:

\begin{aligned}
(\partial_{\theta}f)(\theta_{k},\varphi_{k})&=\frac{2\sin\theta_{k}}{\cos^{3}\theta_{k}\cos^{2k}\varphi_{k}(1-\cos\varphi_{k})^{2}}-\frac{2\cos\theta_{k}}{\sin^{3}\theta_{k}(1-\cos\varphi_{k})^{2}}\\
&=\frac{2\tan^{2}\theta_{k}\sin^{2}\theta_{k}-2\cos^{2}\theta_{k}\cos^{2k}\varphi_{k}}{\sin^{2}\theta_{k}\cos^{2}\theta_{k}\tan\theta_{k}\cos^{2k}\varphi_{k}(1-\cos\varphi_{k})^{2}}\\
&=\frac{2(\tan^{2}\theta_{k}-\cos^{k}\varphi_{k})(\tan^{2}\theta_{k}+\cos^{k}\varphi_{k})}{\sin^{2}\theta_{k}\tan\theta_{k}\cos^{2k}\varphi_{k}(1-\cos\varphi_{k})^{2}}=0 \qquad (1)\\
(\partial_{\varphi}f)(\theta_{k},\varphi_{k})&=\frac{2k\sin\varphi_{k}}{\cos^{2}\theta_{k}\cos^{2k+1}\varphi_{k}(1-\cos\varphi_{k})^{2}}-\frac{2\sin\varphi_{k}}{(1-\cos\varphi_{k})^{3}}\left(\frac{1}{\sin^{2}\theta_{k}}+\frac{1}{\cos^{2}\theta_{k}\cos^{2k}\varphi_{k}}\right)=0 \qquad (2)
\end{aligned}

Then, from Eqs. (1) and (2), we derive

\tan^{2}\theta_{k}=\cos^{k}\varphi_{k} \qquad (3)

and

k=\frac{\cos^{2k+1}\varphi_{k}\cos^{2}\theta_{k}}{1-\cos\varphi_{k}}\times\left(\frac{1}{\sin^{2}\theta_{k}}+\frac{1}{\cos^{2k}\varphi_{k}\cos^{2}\theta_{k}}\right),

respectively. Combining these equations, we obtain

\begin{aligned}
k&=\frac{\cos^{2k+1}\varphi_{k}\cos^{2}\theta_{k}}{1-\cos\varphi_{k}}\times\left(\frac{1}{\sin^{2}\theta_{k}}+\frac{1}{\cos^{2k}\varphi_{k}\cos^{2}\theta_{k}}\right)\\
&=\frac{\cos\varphi_{k}\tan^{4}\theta_{k}\cos^{2}\theta_{k}}{1-\cos\varphi_{k}}\times\left(\frac{1}{\sin^{2}\theta_{k}}+\frac{1}{\tan^{4}\theta_{k}\cos^{2}\theta_{k}}\right)&&(\because\text{Eq. (3)})\\
&=\frac{\cos\varphi_{k}\sin^{4}\theta_{k}}{(1-\cos\varphi_{k})\cos^{2}\theta_{k}}\times\frac{\sin^{2}\theta_{k}+\cos^{2}\theta_{k}}{\sin^{4}\theta_{k}}\\
&=\frac{\cos\varphi_{k}}{1-\cos\varphi_{k}}\times\sec^{2}\theta_{k}=\frac{\cos\varphi_{k}}{1-\cos\varphi_{k}}\times\left(\tan^{2}\theta_{k}+1\right)\\
&=\frac{\cos\varphi_{k}}{1-\cos\varphi_{k}}\times\left(\cos^{k}\varphi_{k}+1\right),&&(\because\text{Eq. (3)})
\end{aligned}

that is, $k(1-\cos\varphi_{k})=\cos\varphi_{k}(\cos^{k}\varphi_{k}+1)$, which rearranges to $k=\cos^{k+1}\varphi_{k}+(k+1)\cos\varphi_{k}$.

Therefore, if $\omega$ is the unique positive solution of the equation $\omega^{k+1}+(k+1)\omega-k=0$, then the optimal parameters are $\theta_{k}=\tan^{-1}\sqrt{\omega^{k}}$ and $\varphi_{k}=\cos^{-1}\omega$. ∎
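In practice, the optimal parameters can be found numerically. The sketch below (our own illustration, not the framework's exact routine) locates $\omega$ by bisection, using the fact that $g(\omega)=\omega^{k+1}+(k+1)\omega-k$ is increasing on $(0,1)$ with $g(0)<0<g(1)$.

```python
import numpy as np

def optimal_parameters(k, tol=1e-12):
    """Optimal (theta_k, phi_k) of Proposition 2, found by bisection on
    g(w) = w**(k+1) + (k+1)*w - k over the interval (0, 1)."""
    lo, hi = 0.0, 1.0                          # g(0) = -k < 0, g(1) = 2 > 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid**(k + 1) + (k + 1) * mid - k < 0:
            lo = mid
        else:
            hi = mid
    omega = (lo + hi) / 2
    theta = np.arctan(np.sqrt(omega**k))       # theta_k = arctan(sqrt(omega^k))
    phi = np.arccos(omega)                     # phi_k   = arccos(omega)
    return theta, phi
```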

3.2 Closure Properties of QFLs

In our framework, we consider Boolean operations over QFLs and distinguish the closure and error-bound properties of each operation. The closure operations are considered for each language class ${\mathsf{MOQFL}}$, ${\mathsf{MMQFL}}$, ${\mathsf{MMQFL}}_{\mathsf{end}}$ and ${\mathsf{MMQFL}}_{\mathsf{coend}}$, along with the basic constructions of each language. Unlike other classes that are defined based on cut-points in previous literature [12, 28], the properties of these classes have only been discussed indirectly. We examine the properties of one-sided error-bounded languages, including closure properties for operations, to clarify how our framework supports the operations.

From an MO-QFA or an MM-QFA, we can obtain a complement QFA by exchanging the roles of the accepting and the rejecting states (for an MO-QFA, by complementing the set of accepting states). We establish the following properties from this construction, which also transforms an end-decisive MM-QFA into a co-end-decisive MM-QFA and vice versa.

Property 1.

${\mathsf{co\text{-}MOQFL}}$ and ${\mathsf{co\text{-}MMQFL}}$ are the classes of languages recognized by MO-QFAs and MM-QFAs with a bounded positive one-sided error, respectively.

The proof of Property 1 follows from the closure property of stochastic languages of QFAs under complement.

Proof.

Let $L$ be a language in ${\mathsf{co\text{-}MOQFL}}$. By definition, $\overline{L}$ is in ${\mathsf{MOQFL}}$, meaning there exist an MO-QFA $M$ and a margin $\varepsilon>0$ such that $f_{M}(w)=1$ for $w\in\overline{L}$ and $f_{M}(w)<1-\varepsilon$ for $w\notin\overline{L}$. Given that stochastic languages are closed under complement, we can construct another MO-QFA $N$ such that $f_{N}(w)=1-f_{M}(w)$. Therefore, $f_{N}(w)=0$ for $w\notin L$ and $f_{N}(w)>\varepsilon$ for $w\in L$. Thus, the MO-QFA $N$ recognizes $L$ with a bounded positive one-sided error. The proof of the converse direction is omitted for brevity. The proof for ${\mathsf{co\text{-}MMQFL}}$ is straightforward and follows similarly. ∎

Property 2.

${\mathsf{co\text{-}MMQFL}}_{\mathsf{end}}$ and ${\mathsf{co\text{-}MMQFL}}_{\mathsf{coend}}$ are the classes of languages recognized by co-end-decisive and end-decisive MM-QFAs with a bounded positive one-sided error, respectively.

Proof.

Property 2 directly results from the way in which the complement of an MM-QFA is constructed, which involves swapping the end-decisiveness and co-end-decisiveness. In the proof of Property 1, if MM is an end-decisive MM-QFA, then the corresponding MM-QFA NN can be co-end-decisive, and vice versa. ∎

We next show how the closure properties of the stochastic languages carry over to closure properties of the QFL classes.

Theorem 3.

Each of the classes ${\mathsf{MOQFL}}$, ${\mathsf{MMQFL}}$, ${\mathsf{MMQFL}}_{\mathsf{end}}$ and ${\mathsf{MMQFL}}_{\mathsf{coend}}$ is closed under intersection.

Proof.

The closure of each class under intersection follows from the closure of the corresponding stochastic languages under convex linear combination. Indeed, if QFAs $M$ and $N$ recognize $L_{1}$ and $L_{2}$ with negative one-sided errors of margin at least $\varepsilon$, then a QFA for $\frac{1}{2}f_{M}\oplus\frac{1}{2}f_{N}$ accepts $w$ with probability 1 if and only if $w\in L_{1}\cap L_{2}$, and with probability at most $1-\frac{\varepsilon}{2}$ otherwise. ∎

Theorem 4.

𝖬𝖮𝖰𝖥𝖫{\mathsf{MOQFL}} and 𝖬𝖬𝖰𝖥𝖫𝖼𝗈𝖾𝗇𝖽{\mathsf{MMQFL}}_{\mathsf{coend}} are closed under union.

Proof.

Let $L_{1}$ and $L_{2}$ be languages in ${\mathsf{MOQFL}}$. By Property 1, there exist MO-QFAs $M_{1}$ and $M_{2}$ recognizing $\overline{L_{1}}$ and $\overline{L_{2}}$ with a bounded positive one-sided error. Hence, there exists an MO-QFA $M$ satisfying $f_{M}(w)=(f_{M_{1}}\otimes f_{M_{2}})(w)$ due to the closure property of MO-QFAs under the Hadamard product. $M$ recognizes $\overline{L_{1}}\cap\overline{L_{2}}$ with a positive one-sided error, and by Property 1 again, $\overline{\overline{L_{1}}\cap\overline{L_{2}}}=L_{1}\cup L_{2}$ is in ${\mathsf{MOQFL}}$. For the closure of ${\mathsf{MMQFL}}_{\mathsf{coend}}$, we can use Property 2 instead of Property 1, together with the closure property of end-decisive MM-QFAs under the Hadamard product. ∎

The next statement follows from the singleton-language construction of Theorem 1 and the closure of ${\mathsf{MMQFL}}_{\mathsf{coend}}$ under union: a unary finite language is a finite union of singleton languages ${\mathsf{EQU}}_{k}$.

Corollary 1.

Every unary finite language is in 𝖬𝖬𝖰𝖥𝖫𝖼𝗈𝖾𝗇𝖽{\mathsf{MMQFL}}_{\mathsf{coend}}.

Table 2 summarizes the upper bounds of operational complexities, as well as the complexities of inverse homomorphism and word quotient, both of which can be easily established from their constructions [25, 12].

Table 2: Upper bounds of operation complexities for QFL classes. Some of the operations in this table are not yet supported by our framework. A dash (-) indicates that the upper bound is the same as in the nearest non-empty row above; an empty cell indicates that the operation is not supported for that class.
QFL Class | Union | Intersection | Inverse Homo. ($h^{-1}$) | Word Quotient ($w\backslash\cdot$)
${\mathsf{MOQFL}}$ | $nm$ | $n+m$ | $n$ | $n$
${\mathsf{MMQFL}}$ | | $n+m$ | - | -
${\mathsf{MMQFL}}_{\mathsf{end}}$ | | $n+m$ | $n+c(h)\cdot n_{\textsf{halt}}$ | $n+|w|\cdot n_{\textsf{halt}}$
${\mathsf{MMQFL}}_{\mathsf{coend}}$ | $nm$ | $n+m$ | - | -
Here $c(h)$ and $n_{\textsf{halt}}$ denote $\max_{\sigma}|h(\sigma)|$ and $|Q_{\textsf{halt}}|$, respectively.

3.3 Density Mapping from QFA States to Qubit Basis Vectors

After a user composes QFAs for their QFLs using either the simple constructions or language operations, the next step is to transpile these QFAs into quantum circuits. In an implementation of a QFA on a quantum computer with $n$ qubits, each state of the QFA is represented by an $n$-bit string, i.e., one of the $2^{n}$ basis vectors, and the transitions are realized through a combination of quantum gates. Additionally, in the current era of noisy intermediate-scale quantum computing [29], we need to apply error mitigation methods that can compensate for possible errors.

Figure 3: The expected error reduction of the proposed $\mathcal{D}$-mapping. The accepting state $q_{1}$ is mapped to the qubit basis $\ket{10000}$. If a bit-flip error occurs on the fourth qubit, then the measured qubit basis is $\ket{10010}$, which describes the state $q_{k}$. This does not affect whether or not the string is accepted because both $q_{1}$ and $q_{k}$ are accepting.

We propose a technique called density ($\mathcal{D}$-) mapping, which is applied before the transpilation stage of the quantum circuit. Rather than mapping QFA states to qubit basis vectors in an arbitrary manner, our $\mathcal{D}$-mapping orders these vectors by the number of qubits with value 1. We then assign the accepting and rejecting states to the basis vectors with a low and a high number of 1-valued qubits, respectively.

$\mathcal{D}$-mapping maximizes the localization of accepting and rejecting states, thereby reducing the probability that bit-flip errors change the outcome. Such errors can arise from decoherence or depolarization and can turn an accepting state into a rejecting state, and vice versa. Additionally, the basis vectors representing the accepting and rejecting states are separated by basis vectors mapped to non-final states and basis vectors that are not mapped to any state. This separation helps to isolate the errors and reduces the number of harmful error cases. Fig. 3 conceptually illustrates how our proposal prevents a bit-flip error from altering the acceptance of a QFA.
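The assignment itself is simple to describe in code. The following sketch conveys the idea of $\mathcal{D}$-mapping but is not the framework's exact procedure: basis labels are ordered by the number of 1-bits, accepting states receive the lowest-weight labels and rejecting states the highest-weight ones, assuming $2^{n}$ is at least the number of states.

```python
def density_mapping(num_qubits, acc_states, rej_states, other_states):
    """Assign QFA states to qubit basis labels ordered by the number of 1-bits.

    Accepting states receive the lowest-weight bitstrings, rejecting states the
    highest-weight ones, and the remaining states (and unused labels) sit
    between them.
    """
    n = num_qubits
    # All n-bit labels, sorted by Hamming weight and then lexicographically.
    labels = sorted((format(i, '0{}b'.format(n)) for i in range(2 ** n)),
                    key=lambda b: (b.count('1'), b))
    mapping = {}
    for state, label in zip(acc_states, labels):            # low-weight labels
        mapping[state] = label
    for state, label in zip(rej_states, reversed(labels)):  # high-weight labels
        mapping[state] = label
    used = set(mapping.values())
    middle = [b for b in labels if b not in used]
    for state, label in zip(other_states, middle):          # everything else
        mapping[state] = label
    return mapping
```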

4 Experimental Results and Analysis

We demonstrate the effectiveness of our $\mathcal{D}$-mapping with experiments using our framework. Our experiments measure how accurately quantum circuits simulate the operation of ${\mathsf{MOD}}_{p}$ with $O(\log p)$ states and ${\mathsf{EQU}}_{k}$ with 4 states, using the default constructions. We compare the difference between the ideal and experimental acceptance probabilities with and without the $\mathcal{D}$-mapping. We take 100,000 shots and measure the number of acceptances and rejections for each string.

We use IBM's Falcon r5.11 quantum processor for real quantum experiments. When simulating the quantum circuits with controlled error rates, we utilize Qiskit simulators (https://www.ibm.com/quantum/qiskit). The Qiskit simulator is configured to use the same basis gates and topological structure as the Falcon processor. We also set the error rates to follow the Falcon processor, except for two types of errors: gate errors and readout errors. A mirroring factor $\eta_{g}$ is applied to each error rate of a gate $G$ in the processor. The error rate of $G$ in the simulator is calculated as follows:

(Error rate of G in the simulator)=ηg×(Average error rate of G in the processor).\text{(Error rate of $G$ in the simulator)}=\eta_{g}\times\text{(Average error rate of $G$ in the processor)}.

Readout errors are adjusted using the mirroring factor ηr\eta_{r} and the readout error in the simulator is calculated similarly.
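As an illustration of this setup, a Qiskit Aer noise model with mirrored error rates could be assembled roughly as follows. This is our own sketch, not the configuration code of the framework; the average backend error rates are assumed to be known and are passed in through a dictionary with hypothetical keys '1q' and '2q'.

```python
from qiskit_aer.noise import NoiseModel, ReadoutError, depolarizing_error

def mirrored_noise_model(avg_gate_error, avg_readout_error, eta_g, eta_r):
    """Noise model whose gate/readout error rates are the backend averages
    scaled by the mirroring factors eta_g and eta_r (illustrative sketch)."""
    noise = NoiseModel()
    # Scaled depolarizing errors for one- and two-qubit basis gates.
    noise.add_all_qubit_quantum_error(
        depolarizing_error(eta_g * avg_gate_error['1q'], 1), ['sx', 'x'])
    noise.add_all_qubit_quantum_error(
        depolarizing_error(eta_g * avg_gate_error['2q'], 2), ['cx'])
    # Scaled symmetric readout error.
    p = eta_r * avg_readout_error
    noise.add_all_qubit_readout_error(ReadoutError([[1 - p, p], [p, 1 - p]]))
    return noise
```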

4.1 Difficulty of QFA Simulation

We measured the ${\mathsf{MOD}}_{5}$ language on the Falcon processor, as depicted in Fig. 4. This measurement was intended to demonstrate the current capabilities and state of quantum computers. Note that the automaton for ${\mathsf{MOD}}_{5}$ accepts strings of length divisible by 5 with probability 1; otherwise, it accepts the string with a probability less than 0.875. However, we can see that the measurements on the real-world computer accept strings with a probability close to 0.5. We attribute the low accuracy to the large size of the resulting quantum circuit, which exceeds 1,000 gates even for a length-2 input.

We also notice that, despite the small error rates of the simulator, the output of the simulator becomes inaccurate as the string grows longer. The inaccuracy is caused by the size of the circuit, which grows linearly with the length of the input string. The accumulated errors eventually dominate and push the output closer to random guessing.

Our experiments from this point on are conducted using a quantum simulator to focus on the effectiveness of $\mathcal{D}$-mapping. We use error mirroring rates designed to reflect the capabilities of near-future quantum computers with error suppression and mitigation.

Figure 4: The experimental results for the language 𝖬𝖮𝖣5{\mathsf{MOD}}_{5}. Blue circles and orange squares indicate the accepting probability of strings, which are simulated on a real-world quantum computer and a simulator, respectively. The dashed line represents the 0.5 mark for random guessing. We set the mirroring factors ηg\eta_{g} and ηr\eta_{r} as 0.5%. In the case of an ideal computer with no errors, the accepting probabilities highlighted by gray boxes are 1.

4.2 Effectiveness of the Density Mapping on Simulators

We demonstrate the usefulness of our $\mathcal{D}$-mapping for emulating QFAs. We evaluate the accepting probability of input strings with the automaton for ${\mathsf{MOD}}_{5}$ implemented as quantum circuits. Fig. 5 compares the experimental results for two quantum circuits recognizing ${\mathsf{MOD}}_{5}$, one of which uses our mapping while the other does not.

Figure 5: The difference in accepting probability between the naive mapping and the $\mathcal{D}$-mapping on ${\mathsf{MOD}}_{5}$. We set the mirroring factors $\eta_{g}$ and $\eta_{r}$ to 0.5%. Blue dots above the red line represent an increase in accepting probability with $\mathcal{D}$-mapping compared to the naive mapping; blue dots below the red line represent a decrease. The strings highlighted with gray boxes should exhibit an increased accepting probability towards the ideal case, while the other strings should show a decrease.

The experimental results confirm that $\mathcal{D}$-mapping increases the accuracy of the simulation in most cases. The mapping increases the accepting probability of strings in ${\mathsf{MOD}}_{5}$ up to length 30 and decreases the accepting probability of strings that are not in ${\mathsf{MOD}}_{5}$. The effectiveness of $\mathcal{D}$-mapping diminishes for longer strings: both mapping methods are increasingly affected by accumulating errors, as discussed in Section 4.1, resulting in similar outcomes.

4.3 Effectiveness of the Density Mapping in Various QFAs

Table 3 presents the mean absolute percentage error (MAPE) of the experimental results for the ${\mathsf{MOD}}_{p}$ and ${\mathsf{EQU}}_{k}$ implementations using the naive mapping and the $\mathcal{D}$-mapping. Each cell denotes the dissimilarity between the ideal probabilities $P(w_{i})$ and the measured probabilities $\overline{P}(w_{i})$ for inputs $w_{1},\ldots,w_{n}$; a simulation with a lower MAPE is more accurate. MAPE is calculated as follows:

{\text{MAPE}}=\frac{1}{n}\sum_{i=1}^{n}\left|\frac{P(w_{i})-\overline{P}(w_{i})}{P(w_{i})}\right|\times 100\,(\%).

We use strings of length 2 to 40 to compute the MAPE for each language. Note that the maximum achievable value of MAPE varies for each language; hence, we cannot directly compare simulation accuracy between different languages using MAPE, and we instead compare the two mappings within each language. In the discussion below we refer to two settings in particular: (1) $\eta_{g}=0.1\%$ and $\eta_{r}=0.1\%$; and (2) $\eta_{g}=1\%$ and $\eta_{r}=1\%$, where the latter has the greater error rates.

Table 3: The dissimilarity between the ideal results and the results of a simulator with the naive and density mapping methods under mirroring factors (M.F.) $\eta_{g}$ and $\eta_{r}$. Each value in the table is the MAPE (%), which indicates the average dissimilarity in acceptance rates of strings.
M.F. (%) | Naive Mapping | Density Mapping
$\eta_{g}$  $\eta_{r}$ | ${\mathsf{MOD}}_{3}$  ${\mathsf{MOD}}_{5}$  ${\mathsf{MOD}}_{7}$  ${\mathsf{EQU}}_{3}$  ${\mathsf{EQU}}_{5}$  ${\mathsf{EQU}}_{7}$ | ${\mathsf{MOD}}_{3}$  ${\mathsf{MOD}}_{5}$  ${\mathsf{MOD}}_{7}$  ${\mathsf{EQU}}_{3}$  ${\mathsf{EQU}}_{5}$  ${\mathsf{EQU}}_{7}$
0.1  0.1 | 2.49  1.64  1.35  0.07  0.08  0.07 | 1.43  1.32  1.35  0.06  0.07  0.07
0.1  0.5 | 2.90  1.35  1.25  0.09  0.09  0.10 | 1.44  1.17  1.65  0.09  0.10  0.09
0.1  1   | 2.82  1.85  1.43  0.10  0.11  0.08 | 1.92  1.21  1.66  0.13  0.11  0.10
0.5  0.1 | 12.76  7.44  5.80  0.10  0.09  0.11 | 6.13  4.85  6.22  0.11  0.11  0.11
0.5  0.5 | 13.08  7.16  6.01  0.14  0.17  0.13 | 6.24  4.64  6.04  0.14  0.13  0.16
0.5  1   | 12.86  7.10  5.87  0.19  0.17  0.20 | 6.08  4.85  6.25  0.18  0.15  0.18
1    0.1 | 24.18  13.83  11.02  0.19  0.21  0.18 | 11.66  8.39  10.94  0.19  0.19  0.19
1    0.5 | 24.26  13.73  10.91  0.23  0.20  0.23 | 11.37  8.28  11.21  0.22  0.18  0.22
1    1   | 24.46  13.94  11.18  0.25  0.23  0.29 | 11.31  8.36  11.15  0.24  0.26  0.27

The experimental results indicate that $\mathcal{D}$-mapping is effective for the QFAs of the languages ${\mathsf{MOD}}_{3}$ and ${\mathsf{MOD}}_{5}$, but not for ${\mathsf{MOD}}_{7}$. For ${\mathsf{MOD}}_{3}$, $\mathcal{D}$-mapping reduces the difference between the ideal and simulated accepting probabilities to 57.43% (1.43/2.49) of the naive value in the former setting and 46.24% (11.31/24.46) in the latter setting. Similarly, for ${\mathsf{MOD}}_{5}$, $\mathcal{D}$-mapping reduces the difference to 80.49% and 59.97% in the former and latter settings, respectively.

Note that the size of the quantum circuit for ${\mathsf{MOD}}_{p}$ increases with the growth of $p$. We observe that the effect of $\mathcal{D}$-mapping is not evident for ${\mathsf{MOD}}_{7}$, as the simulation remains inaccurate regardless of the mapping used. This emphasizes the challenge posed by the large size of the ${\mathsf{MOD}}_{7}$ QFA and the resulting quantum circuits, which involve a significant number of gates.

We also examine the impact of $\mathcal{D}$-mapping on the QFAs for ${\mathsf{EQU}}_{n}$, although its effect is less pronounced than for ${\mathsf{MOD}}_{p}$. Note that the QFAs for ${\mathsf{EQU}}_{n}$ are MM-QFAs. Unlike the MO-QFAs for ${\mathsf{MOD}}_{p}$, simulations of MM-QFAs may terminate before reading all symbols of a string. We suggest that the simulation of MM-QFAs is less affected by gate errors because trailing gates may be ignored, which in turn reduces the effectiveness of $\mathcal{D}$-mapping.

The overall experimental results reveal that gate error rates dominate simulation accuracy for QFAs, and our experiments do not show a clear relation between readout error rates and simulation accuracy. This is primarily due to the fact that the quantum circuits for QFAs have few measurement gates.

5 Conclusions and Future Work

MO-QFAs and MM-QFAs are two of the simplest quantum computational models. Despite their simple structure, designing these models is complex because their transitions must satisfy unitary constraints. Verification is also challenging due to the low accuracy of current quantum devices. We have developed a framework to facilitate easy construction and accurate simulation of QFAs. For the development of the framework, we have

  • composed a novel construction for ${\mathsf{EQU}}_{k}$,

  • demonstrated the closure properties of ${\mathsf{MOQFL}}$, ${\mathsf{MMQFL}}$, ${\mathsf{MMQFL}}_{\mathsf{end}}$ and ${\mathsf{MMQFL}}_{\mathsf{coend}}$,

  • introduced $\mathcal{D}$-mapping to improve the accuracy of experiments on real-world devices, and

  • showcased the basic functionalities of our framework.

Our experiments on real devices highlight a significant limitation of current quantum processors: the QFA simulations on these processors are close to random guessing due to their low accuracy. On the other hand, the experimental results confirm that our density mapping increases the accuracy of QFA simulations.

A major limitation of our framework is its inability to construct QFAs from a human-readable format, while most FA frameworks support direct FA construction from regular expressions. Furthermore, manual construction remains inevitable for some languages because some types of QFLs cannot be obtained by composing ${\mathsf{MOD}}_{p}$ and ${\mathsf{EQU}}_{k}$. We aim to characterize these classes by identifying which languages and operations can generate them. We will then integrate these languages and operations into our framework, enabling users to construct a broader variety of languages more easily.

Our density mapping technique reduces simulation errors by correlating each QFA state with an appropriate dimension of a quantum circuit. The accuracy in quantum circuits can also be increased by reducing the number of gates [16] or by assigning more physical qubits to represent a single logical qubit [30]. The interaction between the density mapping and these existing methods may affect their overall effectiveness. Thus, it is another future work to integrate our density mapping with other techniques for improving the overall performance.

References

  • Benioff [1980] Paul Benioff. The computer as a physical system: A microscopic quantum mechanical hamiltonian model of computers as represented by Turing machines. Journal of statistical physics, 22:563–591, 1980. doi: https://doi.org/10.1007/BF01011339.
  • Emu et al. [2023] Mahzabeen Emu, Salimur Choudhury, and Kai Salomaa. Quantum neural networks driven stochastic resource optimization for metaverse data marketplace. In 2023 IEEE 9th International Conference on Network Softwarization (NetSoft), pages 242–246, 2023. doi: https://doi.org/10.1109/NetSoft57336.2023.10175433.
  • Arrazola et al. [2020] Juan Miguel Arrazola, Alain Delgado, Bhaskar Roy Bardhan, and Seth Lloyd. Quantum-inspired algorithms in practice. Quantum, 4:307, 2020. doi: 10.22331/q-2020-08-13-307.
  • Hagan and Wiebe [2023] Matthew Hagan and Nathan Wiebe. Composite quantum simulations. Quantum, 7:1181, 2023. doi: 10.22331/q-2023-11-14-1181.
  • King [2023] Robbie King. An improved approximation algorithm for quantum max-cut on triangle-free graphs. Quantum, 7:1180, 2023. doi: 10.22331/q-2023-11-09-1180.
  • Nishimura and Ozawa [2009] Harumichi Nishimura and Masanao Ozawa. Perfect computational equivalence between quantum Turing machines and finitely generated uniform quantum circuit families. Quantum Information Processing, 8:13–24, 2009. doi: https://doi.org/10.22331/q-2023-03-20-956.
  • Nishimura [2003] Harumichi Nishimura. Quantum computation with restricted amplitudes. International Journal of Foundations of Computer Science, 14(05):853–870, 2003. doi: https://doi.org/10.1142/S0129054103002059.
  • Gainutdinova and Yakaryılmaz [2018] Aida Gainutdinova and Abuzer Yakaryılmaz. Unary probabilistic and quantum automata on promise problems. Quantum Information Processing, 17:1–17, 2018. doi: https://doi.org/10.1007/s11128-017-1799-0.
  • Vieira and Budroni [2022] Lucas B Vieira and Costantino Budroni. Temporal correlations in the simplest measurement sequences. Quantum, 6:623, 2022. doi: https://doi.org/10.22331/q-2022-01-18-623.
  • Huang et al. [2023] Ruo Cheng Huang, Paul M Riechers, Mile Gu, and Varun Narasimhachar. Engines for predictive work extraction from memoryful quantum stochastic processes. Quantum, 7:1203, 2023. doi: https://doi.org/10.22331/q-2023-12-11-1203.
  • Kondacs and Watrous [1997] Attila Kondacs and John Watrous. On the power of quantum finite state automata. In 38th Annual Symposium on Foundations of Computer Science, FOCS ’97, Miami Beach, Florida, USA, October 19-22, 1997, pages 66–75, 1997. doi: https://doi.org/10.1109/SFCS.1997.646094.
  • Brodsky and Pippenger [2002] Alex Brodsky and Nicholas Pippenger. Characterizations of 1-way quantum finite automata. SIAM Journal on Computing, 31(5):1456–1478, 2002. doi: https://doi.org/10.1137/S0097539799353443.
  • Bertoni et al. [2003] Alberto Bertoni, Carlo Mereghetti, and Beatrice Palano. Quantum computing: 1-way quantum automata. In Developments in Language Theory, 7th International Conference, DLT 2003, Szeged, Hungary, July 7-11, 2003, Proceedings, volume 2710, pages 1–20, 2003. doi: https://doi.org/10.1007/3-540-45007-6_1.
  • Ambainis et al. [1999] Andris Ambainis, Ashwin Nayak, Ammon Ta-Shma, and Umesh Vazirani. Dense quantum coding and a lower bound for 1-way quantum automata. In Proceedings of the thirty-first annual ACM symposium on Theory of computing, pages 376–383, 1999. doi: https://doi.org/10.1145/301250.301347.
  • Giannakis et al. [2015] Konstantinos Giannakis, Christos Papalitsas, and Theodore Andronikos. Quantum automata for infinite periodic words. In 2015 6th International Conference on information, intelligence, systems and applications (IISA), pages 1–6. IEEE, 2015. doi: https://doi.org/10.1109/IISA.2015.7388105.
  • Birkan et al. [2021] Utku Birkan, Özlem Salehi, Viktor Olejár, Cem Nurlu, and Abuzer Yakaryilmaz. Implementing quantum finite automata algorithms on noisy devices. In Computational Science - ICCS 2021 - 21st International Conference, Krakow, Poland, June 16-18, 2021, Proceedings, Part VI, volume 12747, pages 3–16, 2021. doi: https://doi.org/10.1007/978-3-030-77980-1_1.
  • Almeida et al. [2009] André Almeida, Marco Almeida, José Alves, Nelma Moreira, and Rogério Reis. FAdo and GUItar. In Implementation and Application of Automata, 14th International Conference, CIAA 2009, Sydney, Australia, July 14-17, 2009. Proceedings, volume 5642, pages 65–74, 2009. doi: https://doi.org/10.1007/978-3-642-02979-0_10.
  • Kanthak and Ney [2004] Stephan Kanthak and Hermann Ney. FSA: An efficient and flexible C++ toolkit for finite state automata using on-demand computation. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 510–517, 2004. doi: https://doi.org/10.3115/1218955.1219020.
  • Mousavi [2016] Hamoon Mousavi. Automatic theorem proving in Walnut. arXiv, abs/1603.06017, 2016. doi: https://doi.org/10.48550/arXiv.1603.06017.
  • Mohri et al. [1997] Mehryar Mohri, Fernando C. N. Pereira, and Michael Riley. A rational design for a weighted finite-state transducer library. In Automata Implementation, Second International Workshop on Implementing Automata, WIA ’97, London, Ontario, Canada, September 18-20, 1997, Revised Papers, volume 1436 of Lecture Notes in Computer Science, pages 144–158. Springer, 1997. doi: https://doi.org/10.1007/BFb0031388.
  • Allauzen et al. [2007] Cyril Allauzen, Michael Riley, Johan Schalkwyk, Wojciech Skut, and Mehryar Mohri. OpenFst: A general and efficient weighted finite-state transducer library. In Jan Holub and Jan Zdárek, editors, Implementation and Application of Automata, 12th International Conference, CIAA 2007, Prague, Czech Republic, July 16-18, 2007, Revised Selected Papers, volume 4783, pages 11–23, 2007. doi: https://doi.org/10.1007/978-3-540-76336-9_3.
  • May and Knight [2006] Jonathan May and Kevin Knight. Tiburon: A weighted tree automata toolkit. In Implementation and Application of Automata, 11th International Conference, CIAA 2006, Taipei, Taiwan, August 21-23, 2006, Proceedings, volume 4094, pages 102–113, 2006. doi: https://doi.org/10.1007/978-3-031-07469-1.
  • Arrivault et al. [2017] Denis Arrivault, Dominique Benielli, François Denis, and Rémi Eyraud. Scikit-SpLearn: a toolbox for the spectral learning of weighted automata compatible with scikit-learn. In Conference francophone sur l’Apprentissage Aurtomatique, 2017. URL https://hal.science/hal-01777169/.
  • Brainerd [1969] Walter S. Brainerd. Tree generating regular systems. Information and Control, 14(2):217–231, 1969. doi: https://doi.org/10.1016/s0019-9958(69)90065-5.
  • Moore and Crutchfield [2000] Cristopher Moore and James P. Crutchfield. Quantum automata and quantum grammars. Theoretical Computer Science, 237(1-2):275–306, 2000. doi: https://doi.org/10.1016/S0304-3975(98)00191-1.
  • Ambainis et al. [2001] Andris Ambainis, Arnolds Ķikusts, and Māris Valdats. On the class of languages recognizable by 1-way quantum finite automata. In Annual Symposium on Theoretical Aspects of Computer Science, pages 75–86, 2001. doi: https://doi.org/10.1007/3-540-44693-1_7.
  • Bianchi and Palano [2010] Maria Paola Bianchi and Beatrice Palano. Behaviours of unary quantum automata. Fundamenta Informaticae, 104(1-2):1–15, 2010. doi: https://doi.org/10.3233/FI-2010-333.
  • Ambainis and Freivalds [1998] Andris Ambainis and Rusins Freivalds. 1-way quantum finite automata: Strengths, weaknesses and generalizations. In 39th Annual Symposium on Foundations of Computer Science, FOCS ’98, November 8-11, 1998, Palo Alto, California, USA, pages 332–341, 1998. doi: https://doi.org/10.1109/SFCS.1998.743469.
  • Preskill [2018] John Preskill. Quantum computing in the NISQ era and beyond. Quantum, 2:79, 2018. doi: https://doi.org/10.22331/q-2018-08-06-79.
  • Ghosh et al. [2012] Joydip Ghosh, Austin G Fowler, and Michael R Geller. Surface code with decoherence: An analysis of three superconducting architectures. Physical Review A, 86(6):062318, 2012. doi: https://doi.org/10.1103/PhysRevA.86.062318.