Watermarking PRFs against Quantum Adversaries
Abstract
We initiate the study of software watermarking against quantum adversaries. A quantum adversary generates a quantum state as a pirate software that potentially removes an embedded message from a classical marked software. Extracting an embedded message from quantum pirate software is difficult since measurement could irreversibly alter the quantum state. In software watermarking against classical adversaries, a message extraction algorithm crucially uses the (input-output) behavior of a classical pirate software to extract an embedded message. Even if we instantiate existing watermarking PRFs with quantum-safe building blocks, it is not clear whether they are secure against quantum adversaries due to the quantum-specific property above. Thus, we need entirely new techniques to achieve software watermarking against quantum adversaries.
In this work, we define secure watermarking PRFs for quantum adversaries (unremovability against quantum adversaries). We also present two watermarking PRFs as follows.
-
•
We construct a privately extractable watermarking PRF against quantum adversaries from the quantum hardness of the learning with errors (LWE) problem. The marking and extraction algorithms use a public parameter and a private extraction key, respectively. The watermarking PRF is unremovable even if adversaries have (the public parameter and) access to the extraction oracle, which returns a result of extraction for a queried quantum circuit.
-
•
We construct a publicly extractable watermarking PRF against quantum adversaries from indistinguishability obfuscation (IO) and the quantum hardness of the LWE problem. The marking and extraction algorithms use a public parameter and a public extraction key, respectively. The watermarking PRF is unremovable even if adversaries have the extraction key (and the public parameter).
We develop a quantum extraction technique to extract information (a classical string) from a quantum state without destroying the state too much. We also introduce the notion of extraction-less watermarking PRFs as a crucial building block; we achieve the results above by combining this tool with our quantum extraction technique.
Keywords: watermarking, pseudorandom function, post-quantum cryptography
1 NTT Corporation, Tokyo, Japan
{fuyuki.kitagawa.yh,ryo.nishimaki.zk}@hco.ntt.co.jp
1 Introduction
1.1 Background
Software watermarking is a cryptographic primitive that achieves a digital analog of watermarking. A marking algorithm of software watermarking can embed an arbitrary message (bit string) into a computer software modeled as a circuit. A marked software almost preserves the functionality of the original software. An extraction algorithm of software watermarking can extract the embedded message from a marked software. Secure software watermarking should guarantee that no adversary can remove the embedded message without significantly destroying the functionality of the original software (called unremovability).
Barak, Goldreich, Impagliazzo, Rudich, Sahai, Vadhan, and Yang [BGI+12] initiate the study of software watermarking and present the first definition of cryptographically secure software watermarking. Hopper, Molnar, and Wagner [HMW07] also study the definition of cryptographically secure watermarking for perceptual objects. However, neither work presents a concrete secure scheme. A few works study secure constructions of watermarking for cryptographic primitives [NSS99, YF11, Nis13, Nis19], but they consider only restricted removal strategies. Cohen, Holmgren, Nishimaki, Vaikuntanathan, and Wichs [CHN+18] present stronger definitions for software watermarking and the first secure watermarking schemes for cryptographic primitives against arbitrary removal strategies. After this celebrated work, watermarking for cryptographic primitives has been extensively studied [BLW17, KW21, QWZ18, KW19, YAL+19, GKM+19, YAYX20, Nis20].
Primary applications of watermarking are identifying ownership of objects and tracing users that distribute illegal copies. Watermarking for cryptographic primitives also has another exciting application. Aaronson, Liu, Liu, Zhandry, and Zhang [ALL+21] and Kitagawa, Nishimaki, and Yamakawa [KNY21] concurrently and independently find that we can construct secure software leasing schemes by combining watermarking with quantum cryptography. (Precisely speaking, Aaronson et al. achieve copy-detection schemes [ALL+21], which are essentially the same as secure software leasing schemes.) Secure software leasing [AL21] is a quantum cryptographic primitive that prevents users from generating authenticated pirated copies of leased software. (Leased software must be a quantum state since classical bit strings can be easily copied.) Since watermarking has such an exciting application in quantum cryptography and quantum computers might be an imminent threat to cryptography due to rapid progress in research on quantum computing, it is natural and fascinating to study secure software watermarking in the quantum setting.
In quantum cryptography, building blocks must be quantum-safe, such as lattice-based cryptography [Reg09]. However, even if we replace building blocks of existing cryptographic primitives/protocols with quantum-safe ones, we do not necessarily obtain quantum-safe cryptographic primitives/protocols [BDF+11, ARU14]. We sometimes need new proof techniques which are different from classical ones due to quantum-specific properties such as no-cloning and superposition access [Wat09, Zha12b, Zha12a, Unr12, Zha19, CMSZ21]. Even worse, we must consider entirely different security models in some settings. Zhandry [Zha20] studies traitor tracing [CFN94] in the quantum setting as such an example. In quantum traitor tracing, an adversary can output a quantum state as a pirate decoder. Zhandry shows that we need new techniques for achieving quantum traitor tracing because running a quantum pirate decoder to extract information may irreversibly alter the state due to measurement.
Zhandry [Zha20] refers to software watermarking as a cryptographic primitive that has a similar issue to quantum traitor tracing. However, his work focuses only on traitor tracing and does not study software watermarking against quantum adversaries. If we use software watermarking in the quantum setting, an adversary can output a quantum state as a pirate circuit where an embedded message might be removed. However, previous works consider a setting where an adversary outputs a classical pirate circuit. It is not clear whether watermarking schemes based on quantum-safe cryptography are secure against quantum adversaries because we need an entirely new extraction algorithm to extract an embedded message from a quantum pirate circuit. Thus, the main question in this study is:
Can we achieve secure watermarking for cryptographic primitives against quantum adversaries?
We affirmatively answer this question in this work.
1.2 Our Result
Our main contributions are two-fold. One is the definitional work. We define watermarking for pseudorandom functions (PRFs) against quantum adversaries, where adversaries output a quantum state as a pirate circuit that distinguishes a PRF from a random function. (This definitional choice comes from the definition of traceable PRFs [GKWW21]. See Sections 1.3 and 1.4 for the detail.) The other one is constructing the first secure watermarking PRFs against quantum adversaries. We present two watermarking PRFs as follows.
-
•
We construct a privately extractable watermarking PRF against quantum adversaries from the quantum hardness of the learning with errors (LWE) problem. This watermarking PRF is secure in the presence of the extraction oracle and supports public marking. That is, the marking and extraction algorithms use a public parameter and a secret extraction key, respectively. The watermarking PRF is unremovable even if adversaries have access to the extraction oracle, which returns a result of extraction for a queried quantum circuit.
-
•
We construct a publicly extractable watermarking PRF against quantum adversaries from indistinguishability obfuscation (IO) and the quantum hardness of the LWE problem. This watermarking PRF also supports public marking. That is, the marking and extraction algorithms use a public parameter and a public extraction key, respectively. The watermarking PRF is unremovable (we do not need to consider the mark and extraction oracles since it supports public marking and public extraction).
The former and latter PRFs satisfy weak pseudorandomness and standard (strong) pseudorandomness even against a watermarking authority, respectively.
We develop a quantum extraction algorithm to achieve the results above. Zhandry [Zha20] presents a useful technique for extracting information from quantum states without destroying them too much. However, we cannot simply apply his technique to the watermarking setting. Embedded information (arbitrary string) is chosen from an exponentially large set in the watermarking setting. On the other hand, in the traitor tracing setting, we embed a user index, which could be chosen from a polynomially large set, in a decryption key. Zhandry’s technique is tailored to traitor tracing based on private linear broadcast encryption (PLBE) [BSW06] where user information is chosen from a polynomially large set with linear structure. Thus, we extend Zhandry’s technique [Zha20] to extract information chosen from an exponentially large set. We also introduce the notion of extraction-less watermarking as a crucial tool to achieve watermarking against quantum adversaries. This tool is a suitable building block for our quantum extraction technique in our watermarking extraction algorithm. These are our technical contributions. See Section 1.3 for the detail.
Although this paper focuses on watermarking PRFs against quantum adversaries, it is easy to extend our definitions to watermarking public-key encryption (PKE) against quantum adversaries. In particular, our construction technique easily yields watermarking PKE schemes (where a decryption circuit is marked). We will provide the details in a future version.
We also focus on watermarking PRFs with public marking in this paper. However, we can easily convert our PRFs into ones with private marking. See Remark 3.4 for the detail.
1.3 Technical Overview
Syntax of watermarking PRF.
We first review the syntax of watermarking PRFs used in this work. A watermarking PRF scheme consists of five algorithms: a setup algorithm, a key generation algorithm, an evaluation algorithm, a marking algorithm, and an extraction algorithm. (In this paper, standard math font stands for classical algorithms, and calligraphic font stands for quantum algorithms.) The setup algorithm outputs a public parameter and an extraction key. The key generation algorithm is given the public parameter and outputs a PRF key and a public tag. The evaluation algorithm takes as input a PRF key and a value in the domain and outputs the corresponding PRF value. By using the marking algorithm, we can generate a marked evaluation circuit that has an embedded message and can be used to evaluate the PRF on almost all inputs. Finally, the extraction algorithm is supposed to extract the embedded message from a pirated quantum evaluation circuit generated from the marked evaluation circuit. By default, in this work, we consider the public marking setting, where anyone can execute the marking algorithm. Thus, the marking algorithm takes the public parameter as input. On the other hand, we consider both the private extraction and the public extraction settings. Thus, the extraction key used by the extraction algorithm is kept secret by an authority in the private extraction setting and made public in the public extraction setting.
In this work, we allow the extraction algorithm to take the public tag generated with the original PRF key corresponding to the pirate circuit. In reality, we execute the extraction algorithm on a software when a user claims that the software is illegally generated by using her/his PRF key. Thus, it is natural to expect that we can use a user's public tag for extraction. Moreover, pirate circuits are distinguishers, not predictors, in this work. As discussed by Goyal et al. [GKWW21], security against pirate distinguishers is much preferable compared to security against pirate predictors considered in many previous works on watermarking. In this case, it seems that such additional information fed to the extraction algorithm is unavoidable. For a more detailed discussion on the syntax, see the discussion in Section 3.1.
It is also natural to focus on distinguishers breaking weak pseudorandomness of PRFs when we consider pirate distinguishers instead of pirate predictors. Goyal et al. [GKWW21] already discussed this point. Thus, we focus on watermarking weak PRFs in this work.
Definition of unremovability against quantum adversaries.
We say that a watermarking PRF scheme satisfies unremovability if, given a marked evaluation circuit that has an embedded message , no adversary can generate a circuit such that it is a “good enough circuit” but the extraction algorithm fails to output . In this work, we basically follow the notion of “good enough circuit” defined by Goyal et al. [GKWW21] as stated above. Let be the following distribution for a PRF .
- :
-
Generate , , and . Compute . Output .
A circuit is defined as a good enough circuit with respect to if, given output by , it can correctly guess with probability significantly greater than 1/2. In other words, a circuit is defined as good enough if the circuit breaks weak PRF security.
Below, for a distribution whose output is of the form , let be binary positive operator valued measures (POVMs) that represent generating random from and testing if a quantum circuit can guess from . Then, for a quantum state , its overall distinguishing advantage for the above distribution is . Thus, a natural adaptation of the above notion of goodness for quantum circuits might be to define a quantum state as good if is significantly greater than . However, this notion of goodness for quantum circuits is not really meaningful. The biggest issue is that it does not consider the stateful nature of quantum programs.
This issue was previously addressed by Zhandry [Zha20] in the context of traitor tracing against quantum adversaries. In the context of classical traitor tracing or watermarking, we can assume that a pirate circuit is stateless, or can be rewound to its original state. This assumption is reasonable. If we have the software description of the pirate circuit, such a rewinding is trivial. Even if we have a hardware box in which a pirate circuit is built, it seems that such a rewinding is possible by hard reboot or cutting power. On the other hand, in the context of quantum watermarking, we have to consider that a pirate circuit is inherently stateful since it is described as a quantum state. Operations to a quantum state can alter the state, and in general, it is impossible to rewind the state into its original state. Regarding the definition of good quantum circuits above, if we can somehow compute the average success probability of the quantum state , the process can change or destroy the quantum state . Namely, even if we once confirm that the quantum state is good by computing , we cannot know the success probability of the quantum state even right after the computation. Clearly, the above notion of goodness is not the right notion, and we need one that captures the stateful nature of quantum programs.
In the work on traitor tracing against quantum adversaries, Zhandry [Zha20] proposed a notion of goodness for quantum programs that solves the above issue. We adopt it. For the above POVMs , let be the projective measurement that projects a state onto the eigenspaces of , where each is an eigenvalue of . This measurement is called the projective implementation of and denoted as . Zhandry showed that the following process has the same output distribution as :
-
1.
Apply the projective measurement and obtain .
-
2.
Output with probability and output with probability .
Intuitively, the projective measurement projects a state onto an eigenvector of with eigenvalue , which can be seen as a quantum state with success probability . Using , Zhandry defined a quantum circuit to be good if the outcome of the measurement is significantly greater than 1/2. This notion is a natural extension of the classical goodness since it collapses to the classical goodness for a classical decoder. Moreover, we can ensure that a quantum state that is tested as good in this sense still has a high success probability. On the other hand, the naive notion of goodness above cannot say anything about the post-tested quantum state’s success probability even if the test is passed. In this work, we use this notion as the notion of good quantum circuits.
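To make this concrete, the following numpy sketch (an illustration with toy matrices and function names of our own; this classical simulation is of course not an efficient quantum procedure) samples from the projective implementation of a binary POVM and checks that the overall outcome statistics match applying the POVM directly.

```python
import numpy as np

def projective_implementation_sample(P, rho, rng):
    """Sample from the projective implementation of the binary POVM (P, I - P).

    P is a Hermitian matrix with 0 <= P <= I (the element for outcome 1) and
    rho is the density matrix of the tested state.  Returns (p, outcome): the
    eigenvalue p measured by the projective implementation, and a bit that is
    1 with probability p.
    """
    evals, evecs = np.linalg.eigh(P)                      # eigenspaces of P
    # Probability of projecting rho onto each eigenvector: <v_i| rho |v_i>.
    probs = np.real(np.einsum('ij,jk,ki->i', evecs.conj().T, rho, evecs))
    probs = np.clip(probs, 0, None)
    probs /= probs.sum()
    idx = rng.choice(len(evals), p=probs)
    p = float(np.clip(evals[idx], 0.0, 1.0))
    return p, int(rng.random() < p)

rng = np.random.default_rng(0)
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # toy qubit state
P = np.array([[0.9, 0.1], [0.1, 0.2]], dtype=complex)     # toy POVM element

# The two-step process reproduces the statistics of applying (P, I - P) directly.
print("tr(P rho)               =", round(np.real(np.trace(P @ rho)), 3))
outcomes = [projective_implementation_sample(P, rho, rng)[1] for _ in range(20000)]
print("empirical Pr[outcome=1] =", round(sum(outcomes) / len(outcomes), 3))
```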
Difficulty of quantum watermarking PRF.
From the above discussion, our goal is to construct a watermarking PRF scheme that guarantees that we can extract the embedded message correctly if a pirated quantum circuit is . In watermarking PRF schemes, we usually extract an embedded message by applying several tests on success probability to a pirate circuit. When a pirate circuit is a quantum state, the set of tests that we can apply is highly limited compared to a classical circuit due to the stateful nature of quantum states.
One set of tests we can apply without destroying the quantum state is the set of tests for distributions that are indistinguishable from from the view of the pirate circuit. (In the actual extraction process, we use an approximation of the projective implementation introduced by Zhandry [Zha20] since applying a projective implementation is inefficient. In this overview, we ignore this issue for simplicity.) We denote this set as . Zhandry showed that if distributions and are indistinguishable, the outcome of is close to that of . By combining this property with the projective property of projective implementations, as long as the initial quantum state is good and we apply only tests contained in , the quantum state remains good. On the other hand, if we apply a test outside of , the quantum state might be irreversibly altered. This fact is a problem since the set alone is not sufficient to implement the existing, widely used construction method for watermarking PRF schemes.
To see this, we briefly review the method. In watermarking PRF schemes, the number of possible embedded messages is super-polynomial, and thus we basically need to extract an embedded message in a bit-by-bit manner. In the method, such a bit-by-bit extraction is done as follows. For every , we define two distributions and whose output is of the form as above. Then, we design a marked circuit with embedded message so that it can be used to guess from with probability significantly greater than 1/2 only for (resp. ) if (resp. ). The extraction algorithm can extract the -th bit of the message by checking for which of the two distributions a pirate circuit has a high distinguishing advantage.
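For intuition, here is a sketch of this classical bit-by-bit extraction loop (all names are placeholders we introduce for illustration; `pirate` is a classical, stateless distinguisher and `samplers[i]` provides samplers for the two distributions of index i):

```python
def estimate_advantage(pirate, sample, num_trials=10_000):
    """Empirical advantage of `pirate` in guessing the hidden bit from its input."""
    correct = 0
    for _ in range(num_trials):
        inp, b = sample()                  # inp is the part shown to the pirate circuit
        correct += int(pirate(inp) == b)
    return correct / num_trials - 0.5

def extract_classically(pirate, samplers):
    """samplers[i] = (sample_D_i0, sample_D_i1).  The marked circuit keeps a noticeable
    advantage only on the distribution matching its i-th message bit, so the larger
    empirical advantage reveals that bit."""
    message = []
    for sample_D_i0, sample_D_i1 in samplers:
        adv0 = estimate_advantage(pirate, sample_D_i0)
        adv1 = estimate_advantage(pirate, sample_D_i1)
        message.append(0 if adv0 > adv1 else 1)
    return message
```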
As stated above, we cannot use this standard method to extract a message from quantum pirate circuits. The reason is that and are typically distinguishable. This implies that at least one of or is not contained in . Since a test outside of might destroy the quantum state, we might not be able to perform the process for all , and we may fail to extract all the bits of the embedded message.
It seems that to perform the bit-by-bit extraction for a quantum state, we need to extend the set of applicable tests and come up with a new extraction method.
Our solution: Use of reverse projective property.
We find that as another applicable set of tests, we have for distributions that are indistinguishable from , where is the following distribution.
- :
-
Generate , , and . Compute . Output .
We denote the set as . is the distribution obtained from by flipping the first bit of its output. Then, can be seen as POVMs that represent generating random from and testing if a quantum circuit cannot guess from . Thus, we see that . Recall that .
Let and be the distribution that generates and outputs . is a distribution contained in . Similarly to the relation between and , if , we have . Since , and share the same set of eigenvectors, and if a vector is an eigenvector of with eigenvalue , then it is also an eigenvector of with eigenvalue . Thus, if we apply and successively to a quantum state and obtain the outcomes and , it holds that . We call this property the reverse projective property of the projective implementation.
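This eigenvalue relation is elementary; as a sanity check, the following numpy snippet (with a toy POVM element of our own choosing) verifies that every eigenvector of a POVM element with eigenvalue p is an eigenvector of its complement with eigenvalue 1 - p.

```python
import numpy as np

P = np.array([[0.9, 0.1], [0.1, 0.2]])   # toy POVM element ("guess correctly")
Q = np.eye(2) - P                        # complementary element ("guess incorrectly")

p_vals, vecs = np.linalg.eigh(P)
q_vals = np.array([np.real(v.conj() @ Q @ v) for v in vecs.T])

# Same eigenvectors, eigenvalues p and 1 - p.
print(np.allclose(q_vals, 1 - p_vals))   # True
```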
Combining projective and reverse projective properties and the outcome closeness for indistinguishable distributions of the projective implementation, we see that the following key fact holds.
- Key fact:
-
As long as the initial quantum state is and we apply tests contained in or , the quantum state remains . Moreover, if the outcome of applying to the initial state is , we get the outcome close to every time we apply a test in , and we get the outcome close to every time we apply a test in .
In this work, we perform bit-by-bit extraction of embedded messages by using the above key fact of the projective implementation. To this end, we introduce the new notion of extraction-less watermarking PRF as an intermediate primitive.
Via extraction-less watermarking PRF.
An extraction-less watermarking PRF scheme has almost the same syntax as a watermarking PRF scheme, except that it does not have an extraction algorithm and instead has a simulation algorithm . is given the extraction key , the public tag , and an index , and outputs a tuple of the form . simulates outputs of or for a pirate circuit depending on the message embedded in the marked circuit corresponding to the pirate circuit. More concretely, we require that from the view of the pirate circuit generated from a marked circuit with embedded message , outputs of are indistinguishable from those of if and are indistinguishable from those of if for every . We call this security notion simulatability for mark-dependent distributions (SIM-MDD security).
By using an extraction-less watermarking PRF scheme , we construct a watermarking PRF scheme against quantum adversaries as follows. We use of as of , respectively. We explain how to construct the extraction algorithm of using of . For every , we define as the distribution that outputs randomly generated . Given , , and a quantum state , extracts the embedded message in a bit-by-bit manner by repeating the following process for every . (A code sketch of this loop appears right after the list below.)
-
•
Apply to and obtain the outcome , where and is the state after the -th loop for every .
-
•
Set if and otherwise .
The extracted message is set to .
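The loop above can be summarized by the following sketch, where `api_measure` is a hypothetical stand-in (not a name from the paper) for applying the approximate projective implementation associated with the i-th simulated distribution to the current pirate state, returning the measured value and the post-measurement state.

```python
def extract(ek, tau, pirate_state, msg_len, api_measure, threshold=0.5):
    """Bit-by-bit extraction sketch.  The pirate state is only mildly disturbed in each
    iteration because, from its view, every applied test looks like a test for the
    original distribution D or for its flipped version."""
    message = []
    for i in range(msg_len):
        # p_i is (approximately) the success probability of the current state against
        # the i-th simulated distribution; the returned state is the post-measurement one.
        p_i, pirate_state = api_measure(ek, tau, i, pirate_state)
        # A good state gives p_i noticeably above 1/2 in one case and noticeably below
        # 1/2 in the other; mapping the high outcome to bit 0 is an arbitrary choice here.
        message.append(0 if p_i > threshold else 1)
    return message
```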
We show that the above construction satisfies unremovability. Suppose an adversary is given marked circuit and generates a quantum state , where and . Suppose also that is . This assumption means that the outcome of applying to is , where is an inverse polynomial. For every , from the SIM-MDD security of , is indistinguishable from if and is indistinguishable from if . This means that if and if . Then, from the above key fact of the projective implementation, it holds that is close to if and is close to if . Therefore, we see that correctly extracts from . This means that satisfies unremovability.
The above definition, construction, and security analysis are simplified and ignore many subtleties. The most significant point is that we use approximate projective implementations introduced by Zhandry [Zha20] instead of projective implementations in the actual construction since applying a projective implementation is an inefficient process. Moreover, though the outcomes of (approximate) projective implementations for indistinguishable distributions are close, in the actual analysis, we have to take into account that the outcomes gradually change every time we apply an (approximate) projective implementation. These issues can be solved by careful parameter settings.
Comparison with the work by Zhandry [Zha20].
Some readers familiar with Zhandry’s work [Zha20] might think that our technique contradicts the lesson from Zhandry’s work since it essentially says that once we find a large gap in success probabilities, the tested quantum pirate circuit might self-destruct. However, this is not the case. What Zhandry’s work really showed is the following. Once a quantum pirate circuit itself detects that there is a large gap in success probabilities, it might self-destruct. Even if an extractor finds a large gap in success probabilities, if the tested quantum pirate circuit itself cannot detect the large gap, the pirate circuit cannot self-destruct. In Zhandry’s work, whenever an extractor finds a large gap, the tested pirate circuit also detects the large gap. In our work, the tested pirate circuit cannot detect a large gap throughout the extraction process while an extractor can find it.
The reason why a pirate circuit cannot detect a large gap in our scheme even if an extractor can find it is as follows. Recall that in the above extraction process of our scheme based on an extraction-less watermarking PRF scheme, we apply to the tested pirate circuit for every . Each outputs a tuple of the form and is indistinguishable from or depending on the embedded message. In the process, we apply for every , and we get the success probability if is indistinguishable from and we get if is indistinguishable from . The tested pirate circuit needs to know which of or is indistinguishable from the distribution behind the projective implementation to know which of or is the result of an application of a projective implementation. However, this is impossible. The tested pirate circuit receives only part of ’s output and not part. (Recall that the task of the pirate circuit is to guess from .) The only difference between and is that the first bit is flipped. Thus, if the part is dropped, is, in fact, indistinguishable from both and . As a result, the pirate program cannot know which of or is the result of an application of a projective implementation. In other words, the pirate circuit cannot detect a large gap in our extraction process.
Instantiating extraction-less watermarking PRF.
In the rest of this overview, we will explain how to realize extraction-less watermarking PRF.
We consider the following two settings similar to the ordinary watermarking PRF. Recall that we consider the public marking setting by default.
- Private-simulatable:
-
In this setting, the extraction key fed into is kept secret. We require that SIM-MDD security hold under the existence of the simulation oracle that is given a public tag and an index and returns . An extraction-less watermarking PRF scheme in this setting yields a watermarking PRF scheme against quantum adversaries in the private-extractable setting, where unremovability holds for adversaries who can access the extraction oracle.
- Public-simulatable:
-
In this setting, the extraction key is publicly available. An extraction-less watermarking PRF scheme in this setting yields a watermarking PRF scheme against quantum adversaries in the public-extractable setting.
We provide a construction in the first setting using a private constrained PRF based on the hardness of the LWE problem. Also, we provide a construction in the second setting based on IO and the hardness of the LWE problem.
To give a high-level idea behind the above constructions, in this overview, we show how to construct a public-simulatable extraction-less watermarking PRF in the token-based setting [CHN+18]. In the token-based setting, we treat a marked circuit as a tamper-proof hardware token that an adversary can only access in a black-box way.
Before showing the actual construction, we explain the high-level idea. Recall that SIM-MDD security requires that an adversary who is given cannot distinguish from an output of if and from that of if . This is the same as requiring that cannot distinguish from that of the following distribution . We can check that is identical to if and to if .
- :
-
Generate and . Then, if , generate , and otherwise, compute . Output .
Essentially, the only attack that can perform is to feed contained in the given tuple to and compare the result with , provided we ensure that , are pseudorandom. In order to make the construction immune to this attack, letting and , we have to design and so that
-
•
If , outputs a value different from .
-
•
If , outputs .
We achieve these conditions as follows. First, we set output by so that and are random values and is an encryption of by a public-key encryption scheme with the pseudorandom ciphertext property, where the encryption key is included in . Then, we set as a token such that it has the message and the decryption key corresponding to hardwired, and it outputs if the input is decryptable and holds for the decrypted , and otherwise it behaves as . The actual construction is as follows.
Let be a PRF family consisting of functions , where is the security parameter and is sufficiently large. Let be a CCA secure public-key encryption scheme satisfying the pseudorandom ciphertext property. Using these ingredients, we construct an extraction-less watermarking PRF scheme as follows.
- :
-
In this construction, and .
- :
-
It generates a fresh PRF key of and a key pair . The PRF key is and the corresponding public tag is .
- :
-
It simply outputs .
- :
-
It generates the following token .
Hard-Coded Constants: .
Input: .
1. Try to decrypt with , , and .
2. If decryption succeeds, output if and otherwise.
3. Otherwise, output .
- :
-
It first generates and . Then, it parses and generates . Finally, it outputs .
We check that satisfies SIM-MDD security. For simplicity, we fix the message embedded into the challenge PRF key. Then, for any adversary and , SIM-MDD security requires that given and , cannot distinguish from an output of if and is indistinguishable from if .
We consider the case of . We can finish the security analysis by considering the following sequence of mutually indistinguishable hybrid games, where is given in the first game, and on the other hand, is given in the last game. We first change the game so that is generated as a uniformly random value instead of by using the pseudorandom ciphertext property under CCA of . This is possible since the CCA oracle can simulate access to the marked token by . Then, we further change the security game so that if , is generated as instead of a uniformly random value by using the pseudorandomness of . Note that if , remains uniformly random. We see that if , the token never evaluates since . Thus, this change is possible. We see that now the distribution of is exactly the same as that output by . Similarly, in the case of , we can show that an output of is indistinguishable from that output by . The only difference is that in the final step, we change the security game so that is generated as if .
In the actual public-simulatable construction, we implement this idea using IO and puncturable encryption [CHN+18] instead of a token and CCA secure public-key encryption. Also, in the actual private-simulatable construction, we basically follow the same idea using a private constrained PRF and secret-key encryption.
1.4 More on Related Work
Watermarking against classical adversaries.
Cohen et al. [CHN+18] present a publicly extractable watermarking PRF from IO and injective OWFs. It is unremovable against adversaries who can access the mark oracle only before a target marked circuit is given. The mark oracle returns a marked circuit for a queried arbitrary polynomial-size circuit. Suppose we additionally assume the hardness of the decisional Diffie-Hellman or LWE problem. In that case, their watermarking PRF is unremovable against adversaries that can access the mark oracle before and after a target marked circuit is given. However, adversaries can query a valid PRF key to the mark oracle in that case. They also present definitions and constructions of watermarking for public-key cryptographic primitives.
Boneh, Lewi, and Wu [BLW17] present a privately extractable watermarking PRF from privately programmable PRFs, which are variants of private constrained PRFs [BLW17, CC17]. It is unremovable in the presence of the mark oracle. However, it is not secure in the presence of the extraction oracle and does not support public marking. They instantiate a privately programmable PRF with IO and OWFs, but later, Peikert and Shiehian [PS18] instantiate it with the LWE assumption.
Kim and Wu [KW21] (KW17), Quach, Wichs, and Zirdelis [QWZ18] (QWZ), and Kim and Wu [KW19] (KW19) present privately extractable watermarking PRFs from the LWE assumption. They are secure in the presence of the mark oracle. The KW17 construction is not secure in the presence of the extraction oracle and does not support public marking. The QWZ construction is unremovable in the presence of the extraction oracle and supports public marking. However, it does not have pseudorandomness against an authority that generates a marking and extraction key. The KW19 construction is unremovable in the presence of the extraction oracle and has some restricted pseudorandomness against an authority (see the reference [KW19] for the detail). However, it does not support public marking. (Their construction supports public marking in the random oracle model.)
Yang et al. [YAL+19] present a collusion-resistant watermarking PRF from IO and the LWE assumption. Collusion-resistant watermarking means unremovability holds even if adversaries receive multiple marked circuits with different embedded messages generated from one target circuit.
Goyal, Kim, Manohar, Waters, and Wu [GKM+19] improve the definitions of watermarking for public-key cryptographic primitives and present constructions. In particular, they introduce collusion-resistant watermarking and more realistic attack strategies for public-key cryptographic primitives. Nishimaki [Nis20] presents a general method for equipping many existing public-key cryptographic schemes with the watermarking functionality.
Goyal, Kim, Waters, and Wu [GKWW21] introduce the notion of traceable PRFs, where we can identify a user that creates a pirate copy of her/his authenticated PRF. The difference between traceable PRF and (collusion-resistant) watermarking PRF is that there is only one target original PRF and multiple authenticated copies of it with different identities in traceable PRF. In (collusion-resistant) watermarking PRF, we consider many different PRF keys. In addition, Goyal et al. introduce a refined attack model. Adversaries in previous watermarking PRF definitions output a pirate PRF circuit that correctly computes the original PRF values for fraction of inputs. However, adversaries in traceable PRFs output a pirate circuit that distinguishes whether an input pair consists of a random input and a real PRF value or a random input and a random output value. This definition captures a wide range of attacks. For example, it captures adversaries who create a pirate PRF circuit that can compute the first quarter bits of the original PRF output. Such an attack is not considered in previous watermarking PRFs. We adopt the refined attack model in our definitions.
Learning information from adversarial entities in the quantum setting.
Zhandry [Zha20] introduces the definition of secure traitor tracing against quantum adversaries. In traitor tracing, each legitimate user receives a secret key that can decrypt broadcasted ciphertexts and where identity information is embedded. An adversary outputs a pirate decoder that can distinguish whether an input is a ciphertext of or where and are adversarially chosen plaintexts. A tracing algorithm must identify a malicious user’s identity such that its secret decryption key is embedded in the pirate decoder. Thus, we need to extract information from adversarially generated objects. Such a situation also appears in security proofs of interactive proof systems [Wat09, Unr12, ARU14, CMSZ21] (but not in real cryptographic algorithms) since we rewind a verifier.
Zhandry presents how to estimate the decryption success probability of a quantum pirate decoder without destroying the decoding (distinguishing) capability. He achieves a quantum tracing algorithm that extracts a malicious user identity by combining the probability approximation technique above with PLBE [BSW06]. However, his technique is limited to the setting where user identity spaces are only polynomially large, while there are several traitor tracing schemes with exponentially large identity spaces [NWZ16, GKW19]. As observed in previous works [GKM+19, Nis20, GKWW21], traitor tracing and watermarking have similarities since an adversary outputs a pirate circuit in the watermarking setting and an extraction algorithm tries to retrieve information from it. However, a notable difference is that we must consider exponentially large message spaces by default in the (message-embedding) watermarking setting.
Application of (classical) watermarking.
As we explained above, Aaronson et al. [ALL+21] and Kitagawa et al. [KNY21] achieve secure software leasing schemes by using watermarking. A leased software consists of a quantum state and a watermarked circuit. Although they use watermarking schemes in the quantum setting, it is sufficient for their purpose to use watermarking that is secure against adversaries that output a classical pirate circuit. This is because a returned software is verified by a checking algorithm and must have a specific format in secure software leasing. (A valid software must run on a legitimate platform. For example, a video game title of Xbox must run on Xbox.) That is, a returned software is rejected if it does not have a classical circuit part that can be tested by an extraction algorithm of the building block watermarking.
2 Preliminaries
Notations and conventions.
In this paper, standard math or sans serif font stands for classical algorithms (e.g., or ) and classical variables (e.g., or ). Calligraphic font stands for quantum algorithms (e.g., ) and calligraphic font and/or the bracket notation for (mixed) quantum states (e.g., or ). For strings and , denotes the concatenation of and . Let denote the set of integers , denote a security parameter, and denote that is set, defined, or substituted by .
In this paper, for a finite set and a distribution , denotes selecting an element from uniformly at random, denotes sampling an element according to . Let and denote assigning to the output of a probabilistic or deterministic algorithm and a quantum algorithm on an input and , respectively. When we explicitly show that uses randomness , we write . PPT and QPT algorithms stand for probabilistic polynomial time algorithms and polynomial time quantum algorithms, respectively. Let denote a negligible function.
2.1 Quantum Information
Let be a finite-dimensional complex Hilbert space. A (pure) quantum state is a vector . Let be the space of Hermitian operators on . A density matrix is a Hermitian operator with , which is a probabilistic mixture of pure states. A quantum state over is called a qubit, which can be represented by a linear combination of the standard basis . More generally, a quantum system over is called an -qubit quantum system for .
A Hilbert space is divided into registers . We sometimes write to emphasize that the operator acts on register . (The superscript parts are gray colored.) When we apply to registers and , is identified with .
A unitary operation is represented by a complex matrix such that . The operation transforms and into and , respectively. A projector is a Hermitian operator () such that .
For a quantum state over two registers and , we denote the state in as , where is a partial trace of (trace out ).
Given a function , a quantum-accessible oracle of is modeled by a unitary transformation operating on two registers and , in which is mapped to , where denotes XOR group operation on . We write to denote that the algorithm ’s oracle is a quantum-accessible oracle.
Definition 2.1 (Quantum Program with Classical Inputs and Outputs [ALL+21]).
A quantum program with classical inputs is a pair of a quantum state and unitaries , where is the domain, such that the state of the program evaluated on input is equal to . We measure the first register of to obtain an output. We say that has a compact classical description when applying can be efficiently computed given and .
Definition 2.2 (Positive Operator-Valued Measure).
Let be a finite index set. A positive operator valued measure (POVM) is a collection of Hermitian positive semi-definite matrices such that . When we apply POVM to a quantum state , the measurement outcome is with probability . We denote by the distribution obtained by applying to .
Definition 2.3 (Quantum Measurement).
A quantum measurement is a collection of matrices such that . When we apply to a quantum state , the measurement outcome is with probability . Conditioned on the outcome being , the post-measurement state is .
We can construct a POVM from any quantum measurement by setting . We say that is an implementation of . The implementation of a POVM may not be unique.
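As a small numerical illustration of Definitions 2.2 and 2.3 (with toy matrices of our own choosing), the POVM induced by a general measurement is obtained by squaring the measurement operators, and it reproduces the outcome probabilities:

```python
import numpy as np

# A non-projective two-outcome measurement {M_0, M_1} on a qubit.
M0 = np.diag([np.sqrt(0.9), np.sqrt(0.5)])
M1 = np.diag([np.sqrt(0.1), np.sqrt(0.5)])

# Induced POVM elements E_i = M_i^dagger M_i; they sum to the identity.
E0, E1 = M0.conj().T @ M0, M1.conj().T @ M1
print(np.allclose(E0 + E1, np.eye(2)))                    # True

rho = np.array([[0.6, 0.1], [0.1, 0.4]], dtype=complex)   # toy density matrix
p0 = np.real(np.trace(E0 @ rho))                          # Pr[outcome 0] = tr(E_0 rho)
post0 = (M0 @ rho @ M0.conj().T) / p0                     # post-measurement state for outcome 0
print(round(p0, 3), round(np.real(np.trace(post0)), 3))   # 0.74 1.0
```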
Definition 2.4 (Projective Measurement/POVM).
A quantum measurement is projective if for all , is a projector. This implies that for distinct . In particular, a two-outcome projective measurement is called a binary projective measurement, and is written as , where is associated with the outcome , and with the outcome . Similarly, a POVM is projective if for all , is a projector. This also implies that for distinct .
Definition 2.5 (Controlled Projection).
Let be a collection of projective measurements over a Hilbert space , where for . Let be a distribution whose randomness space is . The controlled projection is defined as follows. (We use the superscript to denote that it is associated with the outcome here.)
(1)
2.2 Measurement Implementation
Definition 2.6 (Projective Implementation).
Let:
-
•
be a binary outcome POVM
-
•
be a finite set of distributions over outcomes
-
•
be a projective measurement with index set .
We define the following measurement.
-
1.
Measure under the projective measurement and obtain a distribution over .
-
2.
Output a bit sampled from the distribution .
We say this measurement is a projective implementation of , denoted by if it is equivalent to .
Theorem 2.7 ([Zha20, Lemma 1]).
Any binary outcome POVM has a projective implementation .
Definition 2.8 (Shift Distance).
For two distributions , the shift distance with parameter , denoted by , is the smallest quantity such that for all :
(2)
(3)
For two real-valued measurements and over the same quantum system, the shift distance between and with parameter is
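The displayed formulas did not survive in this version. As a hedged, best-effort reconstruction of the standard formulation from [Zha20] (not a verbatim quote), the intended definitions are of the following shape:

```latex
% Shift distance between real-valued distributions (reconstruction, not verbatim):
\Delta^{\epsilon}_{\mathsf{Shift}}(D_0, D_1) \coloneqq \min\Bigl\{ \delta \;\Big|\;
  \forall x \in \mathbb{R}:\
  \Pr[D_0 \le x] \le \Pr[D_1 \le x + \epsilon] + \delta
  \ \text{ and }\
  \Pr[D_1 \le x] \le \Pr[D_0 \le x + \epsilon] + \delta \Bigr\}.

% Shift distance between two real-valued measurements over the same quantum system:
\Delta^{\epsilon}_{\mathsf{Shift}}(\mathcal{M}, \mathcal{N}) \coloneqq
  \sup_{\rho} \Delta^{\epsilon}_{\mathsf{Shift}}\bigl(\mathcal{M}(\rho), \mathcal{N}(\rho)\bigr).
```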
Definition 2.9 (-Almost Projective [Zha20]).
A real-valued quantum measurement is -almost projective if the following holds. For any quantum state , we apply twice in a row to and obtain measurement outcomes and , respectively. Then, .
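A formalization consistent with the surrounding text, stated as our reconstruction of the missing condition (with accuracy parameter ε and error parameter δ), is:

```latex
% (epsilon, delta)-almost projective (reconstruction): applying the measurement twice in a
% row to any state gives outcomes p and p' that are epsilon-close except with probability delta.
\Pr\bigl[\, |p - p'| \le \epsilon \,\bigr] \ge 1 - \delta .
```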
Theorem 2.10 ([Zha20, Theorem 2]).
Let be any probability distribution and be a collection of projective measurements. For any , there exists an algorithm of measurement that satisfies the following.
-
•
.
-
•
is -almost projective.
-
•
The expected running time of is where is the combined running time of , the procedure mapping , and the running time of measurement .
Theorem 2.11 ([Zha20, Corollary 1]).
Let be an efficiently constructible, potentially mixed state, and efficiently sampleable distributions. If and are computationally indistinguishable, for any inverse polynomial and any function , we have .
Note that the indistinguishability of and needs to hold against distinguishers who can construct in the theorem above. However, this fact is not explicitly stated in [Zha20]. We need to care about this condition if we need secret information to construct , and the secret information is also needed to sample an output from or . We handle such a situation when analyzing the unremovability of our privately extractable watermarking PRF. In that situation, we need a secret extraction key to construct and sample an output from and .
We also define the notion of the reverse almost projective property of API.
Definition 2.12 (-Reverse Almost Projective).
Let be a collection of binary outcome projective measurements. Let be a distribution. We also let . We say is -reverse almost projective if the following holds. For any quantum state , we apply and in a row to and obtain measurement outcomes and , respectively. Then, .
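Analogously, the missing condition is, in our reconstruction (matching the intuition that the second application uses the complementary projections), of the form:

```latex
% (epsilon, delta)-reverse almost projective (reconstruction): the two successive outcomes
% p and p' satisfy p' ~ 1 - p, except with probability delta.
\Pr\bigl[\, |(1 - p) - p'| \le \epsilon \,\bigr] \ge 1 - \delta .
```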
We show that the measurement algorithm in Theorem 2.10 also satisfies Definition 2.12. First, we describe the detail of in Figure 1. uses an ancilla register besides the original Hilbert space . Let be the randomness space of distribution . We define where
Algorithm
Parameter: Collection of projective measurements , distribution , real values .
Input: A quantum state .
1. Initialize a state .
2. Initialize a classical list .
3. Repeat the following .
(a) Apply to register . Let be the measurement outcome and set .
(b) Apply to register . Let be the measurement outcome and set .
4. Let be the number of indices such that in the list , and .
5. If , repeat the loop again until .
6. Discard register, and output .
We use the following lemma to analyze .
Lemma 2.13 ([Jor75]).
For any two Hermitian projectors and on a Hilbert space , there exists an orthogonal decomposition of into one-dimensional and two-dimensional subspaces (the Jordan subspaces) that are invariant under both and . Moreover:
-
•
in each one-dimensional space, and act as identity or rank-zero projectors; and
-
•
in each two-dimensional subspace , and are rank-one projectors: there exists such that projects onto and projects onto .
For each two-dimensional subspace , we call the eigenvalue of the -th subspace. It is easy to see that is an eigenvector of the Hermitian matrix with eigenvalue .
As previous works observed [MW05, Reg, CMSZ21], we obtain the following by Lemma 2.13. There exist orthogonal vectors that span , such that and . Similarly, and . By setting appropriate phases, we have
(4)
(5)
where such that . We also have
(6) |
Theorem 2.14.
in Figure 1 is -reverse almost projective.
Proof of Theorem 2.14.
To analyze , we set , , and apply Lemma 2.13. Then, we have the following relationships:
(7)
(8)
(9)
(10)
where and are decompositions of and , and .
Suppose we apply to a on . We can write the initial state in Figure 1 as since is a basis. We also have that since projects onto in each decomposed subspace . It is easy to see that . Thus, for all , . Therefore, for any , we can write . As we see in Figure 1, when we run , the initial state is and we apply and alternately. Therefore, the quantum state in each decomposed subspace changes as in Figure 2 when we run .
Next, suppose we apply to the quantum state immediately after applying to . ensures that the final measurement is and its result is . This means that the state going into the main loop of (the third item in Figure 1) is identical to the state before is discarded at the application of to . By the definition of , we have for . Thus, the quantum state in each decomposed subspace changes as in Figure 3 when we apply .
From the above discussions, we can view a successive execution of and to as the following single experiment.
-
•
Sample from with the probability .
-
•
Flip biased random coins whose probability of outputting is .
-
•
Flip an even number of additional random coins until is found.
-
•
Flip biased random coins whose probability of outputting is .
-
•
Let be the overall list of coin flips.
Let and be the outcome of and , respectively. is the fraction of ’s in the first bits of . Also, is the fraction of ’s in the last bits of . Then, we have
(11)
(12)
It is easy to see that Equation 12 is equivalent to due to . Therefore, by combining it with Equation 11, we obtain
(13) |
This completes the proof.
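The coin-flipping view above can be simulated directly. The following Monte-Carlo sketch (parameters arbitrary; the intermediate coins flipped until a 1 reappears are omitted since they enter neither fraction) illustrates that the two outcomes satisfy p̃′ ≈ 1 − p̃ except with small probability, which is the content of the theorem.

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps = 2000, 0.05                               # coins per phase and the closeness target

def one_run(p):
    p_tilde = (rng.random(N) < p).mean()          # phase 1: N coins of bias p
    p_tilde_rev = (rng.random(N) < 1 - p).mean()  # phase 2: N coins of bias 1 - p
    return p_tilde, p_tilde_rev

runs = [one_run(rng.uniform(0, 1)) for _ in range(1000)]
violations = sum(abs((1 - a) - b) > eps for a, b in runs)
print(f"fraction of runs with |(1 - p_tilde) - p_tilde'| > {eps}: {violations / len(runs):.3f}")
```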
2.3 Cryptographic Tools
Definition 2.15 (Learning with Errors).
Let be integer functions of the security parameter . Let be an error distribution over . The LWE problem is to distinguish the following two distributions.
When we say we assume the quantum hardness of the LWE problem or the QLWE assumption holds, we assume that for any QPT adversary , it holds that
Definition 2.16 (Pseudorandom Generator).
A pseudorandom generator (PRG) with stretch ( is some polynomial function) is a polynomial-time computable function that satisfies the following. For any QPT adversary , it holds that
where denotes the uniform distribution over .
Theorem 2.17 ([HILL99]).
If there exists an OWF, there exists a PRG.
Definition 2.18 (Quantum-Accessible Pseudo-Random Function).
Let be a family of polynomially computable functions, where and are some polynomials of . We say that is a quantum-accessible pseudo-random function (QPRF) family if for any QPT adversary , it holds that
(14) |
where is the set of all functions from to .
Theorem 2.19 ([Zha12a]).
If there exists an OWF, there exists a QPRF.
Definition 2.20 (Puncturable PRF).
A puncturable PRF (PPRF) is a tuple of algorithms where is a PRF family and satisfies the following two conditions. Note that and are polynomials of .
- Punctured correctness:
-
For any polynomial-size set and any , it holds that
(15)
- Pseudorandom at punctured point:
-
For any polynomial-size set and any QPT distinguisher , it holds that
(16)
where , and denotes the uniform distribution over .
If (i.e., puncturing a single point), we simply write instead of .
It is easy to see that the Goldreich-Goldwasser-Micali tree-based construction of PRFs (GGM PRF) [GGM86] from one-way functions yields puncturable PRFs where the size of the punctured key grows polynomially with the size of the set being punctured [BW13, BGI14, KPTZ13]. Thus, we have:
Theorem 2.21 ([GGM86, BW13, BGI14, KPTZ13]).
If OWFs exist, then for any polynomials and , there exists a PPRF that maps -bits to -bits.
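For readers unfamiliar with the GGM construction referenced above, here is a minimal Python sketch of a GGM-based puncturable PRF (a hash-based length-doubling PRG stand-in and function names of our own; an illustration only, not the constructions used in this paper):

```python
import hashlib

def _G(seed: bytes) -> tuple[bytes, bytes]:
    """Length-doubling PRG stand-in (hash-based, for illustration only)."""
    return (hashlib.sha256(seed + b"0").digest(),
            hashlib.sha256(seed + b"1").digest())

def ggm_eval(key: bytes, x: str) -> bytes:
    """GGM PRF: walk the tree according to the bits of x."""
    node = key
    for bit in x:
        node = _G(node)[int(bit)]
    return node

def puncture(key: bytes, x_star: str) -> list[tuple[str, bytes]]:
    """Punctured key: the co-path siblings along the path to x_star."""
    punctured, node = [], key
    for depth, bit in enumerate(x_star):
        left, right = _G(node)
        sibling = right if bit == "0" else left
        punctured.append((x_star[:depth] + ("1" if bit == "0" else "0"), sibling))
        node = left if bit == "0" else right
    return punctured

def punctured_eval(punctured_key: list[tuple[str, bytes]], x: str) -> bytes:
    """Evaluate at any x != x_star using the matching co-path node."""
    for prefix, node in punctured_key:
        if x.startswith(prefix):
            for bit in x[len(prefix):]:
                node = _G(node)[int(bit)]
            return node
    raise ValueError("cannot evaluate at the punctured point")

key = b"\x00" * 32
assert ggm_eval(key, "0110") == punctured_eval(puncture(key, "1010"), "0110")
```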
Definition 2.22 (SKE).
An SKE scheme with plaintext space and ciphertext space , where for some , is a tuple of three algorithms.
- :
-
The key generation algorithm takes as input the security parameter , and outputs an encryption key .
- :
-
The encryption algorithm takes as input and a plaintext , and outputs a ciphertext .
- :
-
The decryption algorithm takes as input and , and outputs a plaintext .
- Correctness:
-
An SKE scheme is correct if for all and ,
- Sparseness:
-
In this work, we also require that most strings are not valid ciphertexts under a randomly generated key of an SKE scheme:
(17)
Definition 2.23 (Ciphertext Pseudorandomness for SKE).
An SKE scheme satisfies ciphertext pseudorandomness if for any (stateful) QPT , it holds that
Theorem 2.24.
If OWFs exist, there exists an SKE scheme with sparseness and ciphertext pseudorandomness.
The well-known PRF-based SKE satisfies ciphertext pseudorandomness. However, we need padding for sparseness. That is, a ciphertext is where is the randomness of encryption, is a PRF key, is a PRF, and . We check that the first bits of equal . If is sufficiently long, the scheme has sparseness.
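A minimal sketch of this PRF-based SKE with zero-padding for sparseness (HMAC-SHA256 as a PRF stand-in; all parameters and names here are illustrative choices of ours):

```python
import hmac, hashlib, os

PAD = 16          # number of zero bytes prepended for sparseness
MSG_LEN = 16      # fixed plaintext length (bytes), for simplicity

def _prf(key: bytes, r: bytes) -> bytes:
    return hmac.new(key, r, hashlib.sha256).digest()   # 32 = PAD + MSG_LEN bytes

def keygen() -> bytes:
    return os.urandom(32)

def enc(key: bytes, m: bytes) -> bytes:
    assert len(m) == MSG_LEN
    r = os.urandom(16)
    body = bytes(x ^ y for x, y in zip(_prf(key, r), bytes(PAD) + m))
    return r + body                     # ciphertext = (r, PRF_K(r) XOR (0^ell || m))

def dec(key: bytes, ct: bytes):
    r, body = ct[:16], ct[16:]
    plain = bytes(x ^ y for x, y in zip(_prf(key, r), body))
    if plain[:PAD] != bytes(PAD):       # sparseness check: first ell bytes must be zero
        return None                     # reject: not a valid ciphertext under this key
    return plain[PAD:]

k = keygen()
assert dec(k, enc(k, b"A" * MSG_LEN)) == b"A" * MSG_LEN
print(dec(k, os.urandom(48)))           # None with overwhelming probability (sparseness)
```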
Definition 2.25 (Constrained PRF (Syntax)).
A constrained PRF (CPRF) with domain , range , and constraint family where is a tuple of four algorithms.
- :
-
The setup algorithm takes as input the security parameter and a constraint-family parameter , and outputs a master PRF key .
- :
-
The constrain algorithm takes as input and a constraint , and outputs a constrained key .
- :
-
The evaluation algorithm takes as input and an input , and outputs a value .
- :
-
The constrained evaluation algorithm takes as input and , and outputs a value .
Definition 2.26 (Security for CPRF).
A private CPRF should satisfy correctness, pseudorandomness, and privacy.
- Correctness:
-
A CPRF is correct if for any (stateful) QPT adversary , it holds that
- Selective single-key pseudorandomness:
-
A CPRF is selectively single-key pseudorandom if for any (stateful) QPT adversary , it holds that
where is the set of queries to .
- Selective single-key privacy:
-
A CPRF is selectively single-key private if for any (stateful) QPT adversary , there exists a stateful PPT simulator satisfying that
where and .
We say that a CPRF is a selectively single-key private CPRF if it satisfies correctness, selective single-key pseudorandomness, and selective single-key privacy.
Theorem 2.27 ([BTVW17, PS18]).
If the QLWE assumption holds, there exists a selectively single-key private CPRF for polynomial-size classical circuits.
Definition 2.28 (PKE).
A PKE with plaintext space , ciphertext space is a tuple of three algorithms.
- :
-
The key generation algorithm takes as input the security parameter and outputs a key pair .
- :
-
The encryption algorithm takes as input , a plaintext , and outputs a ciphertext .
- :
-
The decryption algorithm takes as input and , and outputs a plaintext or .
- Correctness:
-
A PKE scheme is correct if for all and , it holds that
Definition 2.29 (CCA Security for PKE).
A PKE scheme is CCA secure if for any (stateful) QPT adversary , it holds that
where is the set of queries to after is given .
Theorem 2.30 ([Pei09]).
If the QLWE assumption holds, there exists a PKE scheme that satisfies CCA security.
Definition 2.31 (Indistinguishability Obfuscator [BGI+12]).
A PPT algorithm is a secure IO for a classical circuit class if it satisfies the following two conditions.
- Functionality:
-
For any security parameter , circuit , and input , we have that
(18) - Indistinguishability:
-
For any PPT and QPT distinguisher , the following holds:
If , then we have
(19)
(20)
3 Definition of Quantum Watermarking
We introduce definitions for watermarking PRFs against quantum adversaries in this section.
3.1 Syntax and Pseudorandomness
Definition 3.1 (Watermarking PRF).
A watermarking PRF for the message space with domain and range is a tuple of five algorithms .
- :
-
The setup algorithm takes as input the security parameter and outputs a public parameter and an extraction key .
- :
-
The key generation algorithm takes as input the public parameter and outputs a PRF key and a public tag .
- :
-
The evaluation algorithm takes as input a PRF key and an input and outputs .
- :
-
The mark algorithm takes as input the public parameter , a PRF key , and a message , and outputs a marked evaluation circuit .
- :
-
The extraction algorithm takes as input an extraction key , a tag , a quantum circuit with classical inputs and outputs , and a parameter , and outputs where .
- Evaluation Correctness:
-
For any message , it holds that
Remark 3.2 (On extraction correctness).
Usually, a watermarking PRF scheme is required to satisfy extraction correctness that ensures that we can correctly extract the embedded mark from an honestly marked circuit. However, as observed by Quach et al. [QWZ18], if we require the extraction correctness to hold for a randomly chosen PRF key, it is implied by unremovability defined below. Note that the unremovability defined below considers a distinguisher as a pirate circuit. However, it implies the extraction correctness since we can easily transform an honestly marked circuit into a successful distinguisher. Thus, we do not explicitly require a watermarking PRF scheme to satisfy extraction correctness in this work.
Remark 3.3 (On public marking).
We consider only watermarking PRFs with public marking as in Definition 3.1 since we can achieve public marking by default. The reason is as follows. Suppose that we generate , , and a marking key at the setup. When we generate a PRF key and a public tag at , we can first generate from scratch (ignoring the original ) and set a PRF key and a public tag where . That is, anyone can generate a marked circuit from by . Therefore, we consider public marking by default in our model.
Remark 3.4 (On private marking).
We might prefer private marking in some settings since we might want to prevent adversaries from forging a watermarked PRF. We can convert watermarking PRFs in Definition 3.1 into ones with private marking by using signatures (a structural code sketch of this conversion is given below). Below, we assume that a PRF key includes its public tag since it does not harm security. At the setup phase, we also generate a signature key pair and set a mark key and an extraction key . To embed a message into , we generate a signature and generate . To extract a message, we run , parse , and run . If the verification result is , we output . This conversion is the same as what Goyal et al. [GKM+19] proposed. Adversaries cannot forge a signature for by the unforgeability of . Intuitively, if an adversary can forge a watermarked PRF whose functionality is different from those of watermarked PRFs given from a mark oracle, should hold since public tags are related to PRF keys. This breaks the unforgeability of . Thus, we expect that adversaries cannot break the unforgeability of watermarking. However, we do not formally define watermarking unforgeability against quantum adversaries since it is out of the scope of this work. We leave it as future work.
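A structural sketch of this conversion follows. All function names (wm_setup, wm_mark, wm_extract, sig_keygen, sig_sign, sig_verify) are hypothetical stand-ins for the underlying watermarking PRF and signature scheme, and signing the key's public tag is our assumption; the text leaves the exact signed value unspecified.

```python
SIG_LEN = 64   # placeholder signature length; depends on the signature scheme

def setup(wm_setup, sig_keygen):
    pp, xk = wm_setup()
    vk, sigk = sig_keygen()
    return pp, (pp, sigk), (xk, vk)          # public parameter, marking key, extraction key

def mark(mk, prf_key, tau, message, wm_mark, sig_sign):
    pp, sigk = mk
    sigma = sig_sign(sigk, tau)              # assumption: we sign the key's public tag
    return wm_mark(pp, prf_key, message + sigma)

def extract(xk_prime, tau, circuit, wm_extract, sig_verify):
    xk, vk = xk_prime
    extracted = wm_extract(xk, tau, circuit)
    message, sigma = extracted[:-SIG_LEN], extracted[-SIG_LEN:]
    return message if sig_verify(vk, tau, sigma) else "unmarked"
```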
Discussion on syntax.
Definition 3.1 is a natural quantum variant of classical watermarking PRFs except that the key generation algorithm outputs a public tag , and the extraction algorithm uses it. Such a public tag is not used in previous works on watermarking PRFs [CHN+18, KW21, QWZ18, KW19, YAL+19]. A public tag should not harm watermarking PRF security. We justify using as follows.
First, in almost all known (classical) watermarking constructions [CHN+18, BLW17, KW21, QWZ18, KW19, YAL+19, GKM+19, Nis20], we need to obtain many input-output pairs to extract an embedded message from a marked PRF. This is because we must check whether a tested PRF circuit outputs particular values for particular inputs which depend on the target PRF (such particular inputs are known as marked points). Suppose marked points are fixed and do not depend on the PRF that will be marked. In that case, an adversary can easily remove an embedded message by destroying the functionality at the fixed marked points, which could be revealed via a (non-target) marked PRF that the adversary generated itself. Recall that we consider the public marking setting. This attack was already observed by Cohen et al. [CHN+18].
Second, we consider a stronger adversary model than that in most previous works, as in the definition of traceable PRFs by Goyal et al. [GKWW21]. An adversary outputs a distinguisher-based pirate circuit in our security definition rather than a pirate circuit that computes an entire output of a PRF. This is a refined and realistic model, as Goyal et al. [GKWW21] argued (and as we explain in Section 1.4). In this model, we can no longer obtain a valid input-output pair from a pirate circuit. Such a pair is typical information related to a target PRF. Goyal et al. resolve this issue by introducing a tracing key that is generated from a target PRF. Note that the parameters of watermarking ( and ) should not be generated from a PRF since we consider many different PRF keys in the watermarking PRF setting.
Thus, if we would like to achieve an extraction algorithm and the stronger security notion simultaneously, an extraction algorithm should somehow take information related to a target PRF as input to correctly extract an embedded message. In the weaker adversary model, an extraction algorithm can easily obtain many valid input and output pairs by running a tested circuit many times. However, in the stronger distinguisher-based pirate circuit model, a pirate circuit outputs a single decision bit.
To resolve this issue, we introduce public tags. We think it is natural to have information related to the original PRF key in an extraction algorithm. In reality, we check a circuit when a user claims that her/his PRF key (PRF evaluation circuit) is illegally used. Thus, it is natural to expect we can use a user’s public tag for extraction. This setting resembles watermarking for public-key cryptographic primitives, where a user public key is available in an extraction algorithm. In addition, public tags do not harm PRF security in our constructions. It is unclear whether we can achieve unremovability in the stronger distinguisher-based model without any syntax change (even in the classical setting). (Footnote 10: Even if we consider the weaker adversary model, the same issue appears in the quantum setting in the end. If we run a quantum circuit for an input and measure the output, the measurement could irreversibly alter the quantum state and we lose the functionality of the original quantum state. That is, there is no guarantee that we can correctly check whether a tested quantum circuit is marked or not after we obtain a single valid pair of input and output by running the circuit. However, as we explained above, we want to obtain information related to a target PRF for extraction. Thus, we need a public tag in the syntax in either case.)
Extended pseudorandomness.
We consider extended weak pseudorandomness, where weak pseudorandomness holds even if the adversary generates . This notion is the counterpart of extended pseudorandomness by Quach et al. [QWZ18], where pseudorandomness holds in the presence of the extraction oracle. However, our pseudorandomness holds even against the authority, unlike the extended pseudorandomness of Quach et al., since we allow adversaries to generate a public parameter.
Definition 3.5 (Extended Weak Pseudorandomness against Authority).
To define extended weak pseudorandomness for watermarking PRFs, we define the game as follows.
-
1.
first sends to the challenger.
-
2.
The challenger generates and sends to .
-
3.
The challenger chooses . can access the following oracles.
- :
-
When this is invoked (no input), it returns where and .
- :
-
When this is invoked (no input), it returns:
-
•
where and if ,
-
•
where and if .
This oracle is invoked only once.
-
•
-
4.
When terminates with output , the challenger outputs if and otherwise.
We say that is extended weak pseudorandom if for every QPT , we have
(21)
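For reference, the displayed condition is the standard negligible-advantage requirement; in placeholder notation (the symbols below are introduced here only for readability), it can be written as
\[
\left|\,\Pr\bigl[\mathsf{Exp}^{\mathsf{ext\text{-}wprf}}_{\mathcal{A}}(\lambda) = 1\bigr] - \frac{1}{2}\,\right| \;\le\; \mathsf{negl}(\lambda).
\]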
3.2 Unremovability against Quantum Adversaries
We define unremovability for watermarking PRFs against quantum adversaries.
Definition 3.6 (Unremovability for private extraction).
We consider the public marking and secret extraction setting here. Let . We define the game as follows.
-
1.
The challenger generates and gives to the adversary . sends to the challenger. The challenger generates , computes , and sends and to .
-
2.
can access the following oracle.
- :
-
On input and a quantum circuit , it returns .
-
3.
Finally, the adversary outputs a “pirate” quantum circuit , where is a quantum program with classical inputs and outputs whose first register (i.e., output register) is and is a compact classical description of .
Let be the following distribution.
- :
-
Generate , , and . Compute . Output .
We also let be a collection of binary outcome projective measurements, where
(22)
Moreover, we let be binary outcome POVMs, where
(23)
- :
-
When applying the measurement to , we obtain a value such that .
- :
-
When computing , it holds that .
- :
-
When computing , it holds that .
We say that satisfies unremovability if for every and QPT , we have
(24)
Intuitively, is a projective measurement that feeds to and checks whether the outcome is or not (and then uncomputes). Then, can be seen as a POVM that results in with the probability that can correctly guess from for generated randomly from .
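In placeholder notation (the symbols $P_{\gamma,x,b}$, $\mathcal{D}$, and $\mathcal{P}_{\mathcal{D}}$ below stand in for the measurements and distribution defined above), this mixture-of-projective-measurements view reads
\[
\mathcal{P}_{\mathcal{D}} \;=\; \bigl(P_{\mathcal{D}},\, I - P_{\mathcal{D}}\bigr),
\qquad
P_{\mathcal{D}} \;=\; \sum_{(\gamma, x, b)} \Pr_{\mathcal{D}}[(\gamma, x, b)] \cdot P_{\gamma, x, b},
\]
where $P_{\gamma,x,b}$ coherently runs the pirate circuit on input $(\gamma, x)$, checks whether the output equals $b$, and then uncomputes. Applying $\mathcal{P}_{\mathcal{D}}$ to a state therefore accepts with exactly the probability that the pirate circuit guesses $b$ correctly for a fresh sample drawn from $\mathcal{D}$.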
Remark 3.7 (On attack model).
We check whether correctly distinguishes a real PRF value from a random value or not by applying to . This attack model follows the refined and more realistic attack model by Goyal et al. [GKWW21]. In all previous works except theirs, the adversary outputs a pirate circuit that computes an entire PRF value.
The distinguisher-based pirate circuit model is compatible with the (quantum) pirate decoder model of traitor tracing. Thus, our attack model also follows the attack model of quantum traitor tracing (the black box projection model) by Zhandry [Zha20, Section 4.2]. (Footnote 11: In the watermarking setting, an extraction algorithm can take the description of a pirate circuit as input (corresponding to the software decoder model [Zha20, Section 4.2]), unlike the black-box tracing model of traitor tracing. However, we use a pirate circuit in a black-box way for our extraction algorithms. Thus, we follow the black box projection model by Zhandry [Zha20].)
As in the traitor tracing setting [Zha20], is inefficient in general. We can handle this issue as Zhandry did. We will use an approximate version of to achieve an efficient reduction. In addition, we cannot apply both and to simultaneously. However, the condition claims that an embedded mark cannot be removed as long as the pirate circuit is alive. This fits the spirit of watermarking. See Zhandry’s paper [Zha20, Section 4] for more discussion on the models.
Remark 3.8 (On selective message).
As we see in Definition 3.6, we consider the selective setting for the private extraction case, where must send the target message to the challenger before it accesses the oracle and after is given. This is the same setting as that by Quach et al. [QWZ18]. We can consider the fully adaptive setting, where can send the target message after it accesses the oracle , as Kim and Wu [KW19] do. However, our privately extractable watermarking PRF satisfies only selective security. Thus, we write only the selective variant for the private extraction case.
Definition 3.9 (Unremovability for Public Extraction).
This is the same as Definition 3.6 except that we use the game , defined in the same way as except for the following differences.
-
•
In item 1, is given together with .
-
•
Item 2 is removed.
4 Definition of Extraction-Less Watermarking
We introduce the notion of extraction-less watermarking PRF as an intermediate primitive towards watermarking PRFs secure against quantum adversaries.
4.1 Syntax and Pseudorandomness
Definition 4.1 (Extraction-Less Watermarking PRF).
An extraction-less watermarking PRF for the message space with domain and range is a tuple of five algorithms , where the first four algorithms have the same input/output behavior as those defined in Definition 3.1 and has the following input/output behavior.
- :
-
The simulation algorithm takes as input the extraction key , a tag , and an index , and outputs a tuple .
- Evaluation Correctness:
-
It is defined in exactly the same way as the evaluation correctness for watermarking PRF defined in Definition 3.1.
Extended pseudorandomness.
Extended pseudorandomness for extraction-less watermarking PRF is defined in exactly the same way as that for watermarking PRF, that is Definition 3.5.
4.2 Simulatability for Mark-Dependent Distributions (SIM-MDD Security)
We introduce the security notion for extraction-less watermarking PRF that we call simulatability for mark-dependent distributions. Let and be the following distributions.
- :
-
Generate , , and . Compute . Output .
- :
-
Generate . Output .
Namely, is the distribution that outputs a random value if the first bit and a PRF evaluation if the first bit , and is its opposite (i.e., a PRF evaluation if and a random value if ). SIM-MDD security is a security notion that guarantees that an adversary given cannot distinguish an output of from that of if and from that of if .
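The following Python sketch captures this pair of mark-dependent distributions under our reading; the helper prf_eval and the byte lengths are placeholders, and treating the low-order bit of the first byte of gamma as the first bit is an illustrative simplification.

import secrets

def sample_mdd(beta, prf_eval, in_len_bytes, out_len_bytes):
    # Output a tuple (gamma, x, y).  Writing b for the first bit of gamma:
    # in D_1 (beta = 1), y is a PRF value when b = 1 and random when b = 0;
    # D_0 (beta = 0) is the opposite.
    gamma = secrets.token_bytes(16)
    b = gamma[0] & 1
    x = secrets.token_bytes(in_len_bytes)
    if (b == 1) == (beta == 1):
        y = prf_eval(x)                         # real PRF evaluation on x
    else:
        y = secrets.token_bytes(out_len_bytes)  # uniformly random range element
    return gamma, x, y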
Definition 4.2 (SIM-MDD Security with Private Simulation).
To define SIM-MDD security with private simulation, we define the game as follows, where .
-
1.
The challenger generates and sends to . sends to the challenger. The challenger generates and computes . The challenger sends and to .
-
2.
can access the following oracle.
- :
-
On input and , it returns .
-
3.
Let be the following distribution. Note that is identical with if and with if .
- :
-
Generate and . Then, if , generate , and otherwise, compute . Output .
The challenger generates . If , the challenger samples . If , the challenger generates . The challenger sends to .
-
4.
When terminates with output , the challenger outputs if and otherwise.
Note that is not allowed to access after is given .
We say that is SIM-MDD secure if for every and QPT , we have
(25)
We consider the selective setting above, as in the unremovability for private extraction in Definition 3.6, since we use SIM-MDD security with private simulation to achieve unremovability with private extraction.
Remark 4.3 (On multi-challenge security).
We can prove that the above definition implies the multi-challenge variant where polynomially many outputs of are required to be indistinguishable from those of . This is done by hybrid arguments where outputs of are simulated using and those of are simulated using . To apply Theorem 2.11, we need the multi-challenge variant. However, we consider the single-challenge variant due to the implication above. A similar remark applies to the variants of SIM-MDD security introduced below.
SIM-MDD security with private simulation under the oracle.
Let the oracle be an oracle that is given and a quantum state , and returns the result of and the post-measurement state, where is defined in the same way as that in Definition 3.6 and is the distribution that outputs randomly generated . The oracle cannot be simulated using the simulation oracle since we need a superposition of outputs of to compute . When constructing watermarking PRFs with private simulation from extraction-less watermarking PRFs, the underlying extraction-less watermarking PRF scheme needs to satisfy SIM-MDD security with private simulation under the oracle, which we call QSIM-MDD security with private simulation. The reason is as follows. In the security analysis of the construction, the indistinguishability guarantee provided by SIM-MDD security needs to hold for an adversary against the resulting watermarking scheme who can access the extraction oracle. This means that it also needs to hold for an adversary who can access the oracle since is repeatedly invoked in the extraction algorithm of the resulting scheme.
Fortunately, as we will see, we can generically convert an extraction-less watermarking PRF scheme satisfying SIM-MDD security with private simulation into one satisfying QSIM-MDD security with private simulation, using QPRFs. Thus, when realizing an extraction-less watermarking PRF scheme as an intermediate step towards privately extractable watermarking PRFs, we can concentrate on realizing one satisfying SIM-MDD security with private simulation.
Remark 4.4.
There is a similar issue in the traitor tracing setting. If PLBE is a secret-key based one, we need a counterpart of QSIM-MDD in secret-key based PLBE to achieve traitor tracing with a secret tracing algorithm against quantum adversaries by using Zhandry’s framework [Zha20]. Note that Zhandry focuses on public-key based PLBE in his work [Zha20].
Definition 4.5 (QSIM-MDD Security with Private Simulation).
Let be a distribution defined as follows.
- :
-
Output .
Then, we define the game in the same way as except that, in addition to , can access the following oracle in step 2.
- :
-
On input and a quantum state , it returns the result of and the post-measurement state, where is defined in the same way as that in Definition 3.6.
We say that is QSIM-MDD secure with private simulation if for every and QPT , we have
(26)
We have the following theorem.
Theorem 4.6.
Assume there exists an extraction-less watermarking PRF scheme satisfying SIM-MDD security with private simulation and a QPRF. Then, there exists an extraction-less watermarking PRF scheme satisfying QSIM-MDD security with private simulation.
We prove this theorem in Appendix A.
Definition 4.7 (SIM-MDD Security with Public Simulation).
We define the game in the same way as except for the following differences, where .
-
•
In item 1, is given together with .
-
•
Item 2 is removed.
We say that satisfies SIM-MDD security with public simulation if for every and QPT , we have
(27)
5 Watermarking PRF from Extraction-Less Watermarking PRF
We show how to construct watermarking PRF secure against quantum adversaries from extraction-less watermarking PRF.
Let be an extraction-less watermarking PRF scheme whose message space is . We construct a watermarking PRF scheme whose message space is as follows. We use , , and as , , and , respectively. Thus, the domain and range of are the same as those of . Also, we construct and as follows; a code-style sketch of the extraction procedure is given after the description.
- :
-
-
•
Output .
-
•
- :
-
-
•
Let and .
-
•
Parse .
-
•
Let be defined in the same way as that in Definition 3.6 and be the following distribution for every .
- :
-
Output .
-
•
Compute . If , return . Otherwise, letting be the post-measurement state, go to the next step.
-
•
For all , do the following.
-
1.
Compute . Let be the post-measurement state.
-
2.
If , set . If , set . Otherwise, exit the loop and output .
-
1.
-
•
Output .
-
•
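The extraction procedure described above can be summarized by the following code-style sketch. The helper api_measure is a placeholder for the approximate projective implementation (API) of the mixture of measurements for a given distribution; it returns an estimated success probability and the post-measurement state. The thresholds are illustrative and do not reproduce the paper's exact parameters.

def extract(api_measure, state, dist_prk, dists, eps, delta, msg_len):
    # Step 1: estimate how well the pirate state distinguishes under D_prk.
    p_tilde, state = api_measure(state, dist_prk, eps, delta)
    if p_tilde < 0.5 + eps:
        return "unmarked"                 # the state is not a good distinguisher
    # Step 2: read off each message bit by re-measuring with the bit-dependent distribution.
    message = []
    for i in range(msg_len):
        p_i, state = api_measure(state, dists[i], eps, delta)
        if p_i > p_tilde - eps:           # advantage preserved: record the bit as 0
            message.append(0)
        elif p_i < 1.0 - p_tilde + eps:   # advantage flipped: record the bit as 1
            message.append(1)
        else:                             # inconclusive: the state was disturbed too much
            return "fail"
    return message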
We have the following theorems.
Theorem 5.1.
If satisfies extended weak pseudorandomness against authority, then so does .
Theorem 5.2.
If is an extraction-less watermarking PRF that satisfies QSIM-MDD security, is a privately extractable watermarking PRF.
Theorem 5.3.
If is an extraction-less watermarking PRF that satisfies SIM-MDD security with public simulation, is a publicly extractable watermarking PRF.
It is clear that Theorem 5.1 holds since the evaluation algorithm of is the same as that of and extended weak pseudorandomness is insensitive to how the marking and extraction algorithms are defined. Thus, we omit a formal proof.
The proofs of Theorems 5.2 and 5.3 are almost the same. Thus, we only provide the proof for the former, and omit the proof for the latter.
Proof of Theorem 5.2.
Let . Let be a QPT adversary attacking the unremovability of . The description of is as follows.
-
1.
The challenger generates and gives to the adversary . sends to the challenger. The challenger generates , computes , and sends to .
-
2.
can access the following oracle.
- :
-
On input and a quantum circuit , it returns .
-
3.
Finally, the adversary outputs a quantum circuit .
We define , , , and the three events , , and in the same way as Definition 3.6.
The proof of .
outputs if and only if , that is, we have . Let be the probability obtained by applying to . Then, we have . Let be the outcome obtained if we apply to . From the property of , we have
(28)
and are computationally indistinguishable by the QSIM-MDD security of since outputs of are indistinguishable from those of if . This indistinguishability holds even under the existence of . Then, from Theorem 2.11, we have
(29)
By combining the above two equations, we obtain .
The reason and need to be computationally indistinguishable under the existence of to apply Theorem 2.11 is as follows. In this application of Theorem 2.11, the quantum state appearing in its statement is set to the state contained in the quantum circuit output by . Then, Theorem 2.11 (implicitly) requires that and be indistinguishable for distinguishers who can construct . To construct , we need to execute , which can access , in which is repeatedly executed. This is the reason and need to be indistinguishable under the existence of .
The proof of .
We define the event as follows for every .
- :
-
When running , the following conditions hold.
-
•
holds.
-
•
holds for every .
-
•
exits the -th loop or holds.
-
•
Then, we have . Below, we estimate .
We first consider the case of and . Assume holds. Then, we have . Let . From the almost-projective property of , we have
(30)
When and , and are computationally indistinguishable since both of them are computationally indistinguishable from by the QSIM-MDD security of . This indistinguishability holds under the existence of . Thus, from Theorem 2.11, we have
(31)
This means that in this case. Note that the reason the indistinguishability of and needs to hold under is that Theorem 2.11 requires it to hold for distinguishers who can construct .
Next, we consider the case of and . Assume holds. Then, we have . We then define an additional distribution as follows.
- :
-
Generate . Output .
That is, the first bit of the output is flipped from . Then, for any random coin , we have . This is because we have for any tuple . Therefore, is exactly the same process as . Let . From the reverse-almost-projective property of , we have
(32)
When and , and are computationally indistinguishable since both of them are computationally indistinguishable from the following distribution by the QSIM-MDD security of .
- :
-
Generate . Output .
This indistinguishability holds under the existence of . Thus, from Theorem 2.11, we have
(33)
This means that also in this case. Note that the reason the indistinguishability of and needs to hold under is that Theorem 2.11 requires it to hold for distinguishers who can construct .
Similarly, we can prove that holds in the case of and .
Overall, we see that holds in all cases.
6 Extraction-Less Watermarking PRF from LWE
We present an extraction-less watermarking PRF, denoted by , whose message space is with domain and range . We use the following tools, which can be instantiated with the QLWE assumption (see Theorems 2.30, 2.27 and 2.24):
-
•
Private CPRF . For ease of notation, we denote CPRF evaluation circuit and constrained evaluation circuits by and , respectively, where iff .
-
•
SKE scheme . The plaintext space and ciphertext space of are and , respectively, where .
-
•
PKE scheme . The plaintext space of PKE is .
Construction overview.
We already explained the high-level idea for how to realize extraction-less watermarking PRFs in Section 1.3. However, the construction of requires some additional effort. Thus, before providing the actual construction, we provide a high-level overview of .
Recall that letting and , we have to design and so that
-
•
If , outputs a value different from .
-
•
If , outputs .
In the token-based construction idea, we achieve these conditions by setting as an encryption of and designing as a token such that it outputs if the input is decryptable and holds for the decrypted value , and otherwise behaves as the original evaluation circuit. However, in , we use a constrained evaluation circuit of as , and thus we cannot program output values for specific inputs. Intuitively, it seems that needs to use the original PRF key to achieve the above two conditions.
To solve the issue, we adopt the idea used by Quach et al. [QWZ18]. In , the setup algorithm generates of , and sets and . Then, the PRF key generation algorithm is given , generates along with , and sets the public tag as an encryption of under . The evaluation algorithm of is simply that of .
Now, we explain how to design and to satisfy the above two conditions. Given , and , is able to extract . Then, generates and sets and . We set as a constrained version of for a circuit that outputs if the input is decryptable by and holds for decrypted value , and otherwise outputs . For an input , the constrained version of outputs the correct output if and only if . We can check that satisfies the above two conditions.
The above construction does not satisfy extended weak pseudorandomness against an authority since the authority can extract the original CPRF key by . However, this problem can be fixed by constraining . We see that needs to evaluate for valid ciphertexts of . Thus, to implement the above mechanism, it is sufficient to set the public tag as an encryption of and a constrained version of for a circuit that outputs if and only if the input is decryptable by . Then, the authority can only extract such a constrained key. By requiring sparseness for , the constrained key cannot be used to break the pseudorandomness of for random inputs. This means that satisfies extended weak pseudorandomness against an authority. Note that we only need a single-key CPRF for since either a user or the authority (not both) is a malicious entity in security games.
The description of is as follows.
- :
-
-
•
Generate .
-
•
Output .
-
•
- :
- :
-
Recall that is a keyed CPRF evaluation circuit.
-
•
Parse .
-
•
Output .
-
•
- :
-
-
•
Parse and .
-
•
Construct a circuit described in Figure 5.
-
•
Compute , where is a set such that iff .
-
•
Output .
-
•
- :
-
-
•
Parse .
-
•
Compute .
-
•
Choose .
-
•
Compute and .
-
•
Output .
-
•
Circuit
Constants: An SKE key , and a message .
Input: A string .
1. Compute .
2. Output if and otherwise.
Circuit
Constants: An SKE key , and a message .
Input: A string .
1. Compute .
2. If , do the following:
(a) Parse , where and .
(b) If , output . Otherwise, output .
3. Otherwise, output .
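The two circuits in Figure 5 can be read as the following hedged sketch. The helper ske_dec (returning None on invalid ciphertexts), the plaintext format (i, gamma), the direction of the bit comparison, and the meaning of the Boolean output with respect to the CPRF constraint convention are all assumptions made for illustration.

def decryptability_check(ske_dec, ske_key, x):
    # First circuit in Figure 5: accept x iff it is a valid SKE ciphertext under ske_key.
    return ske_dec(ske_key, x) is not None

def message_dependent_check(ske_dec, ske_key, message_bits, x):
    # Second circuit in Figure 5: accept x iff it decrypts to a pair (i, gamma)
    # whose bit gamma is consistent with the i-th bit of the embedded message.
    pt = ske_dec(ske_key, x)
    if pt is None:
        return False
    i, gamma = pt                        # assumed plaintext format
    return message_bits[i] == gamma      # equality vs. inequality is an assumption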
The evaluation correctness of follows from the sparseness of and the correctness of . For the security of , we have the following theorems.
Theorem 6.1.
If is a secure SKE scheme with pseudorandom ciphertexts, is a selectively single-key private CPRF, and is a CCA secure PKE scheme, then is an extraction-less watermarking PRF satisfying SIM-MDD security.
Theorem 6.2.
If is a selective single-key private CPRF, satisfies extended weak pseudorandomness.
SIM-MDD security.
First, we prove the SIM-MDD security of .
Proof of Theorem 6.1.
We define a sequence of hybrid games to prove the theorem.
- :
-
This is the same as the case in . In this game, is given and as a public tag and a marked circuit. After and are given, can access . Finally, after finishing the access to , is given as the challenge tuple and outputs , where , , and .
- :
-
This is the same as except for the following two changes. First, is used instead of when generating the challenge tuple . Second, we change the behavior of as follows. When sends and to , if , performs the remaining procedures by using (without decrypting ).
- :
-
This is the same as except that we use instead of .
- :
-
This is the same as except that if , we use instead of .
- :
-
This is the same as except that we use a simulated instead of for the challenge marked circuit. Also, if , the challenger computes . In addition, we also change the behavior of as follows. Given and , if , answers in the same way as . Otherwise, it returns , where , , and if and otherwise.
- :
-
This is the same as except that we use instead of .
- :
-
We undo the change at .
- :
-
We undo the change at .
- :
-
We undo the change at . This is the same as the case in .
Proposition 6.3.
If , , and are correct, it holds that .
Proof of Proposition 6.3.
For the first change, holds since from the correctness of . Then, from the correctness of , we have , and thus the first change does not affect the view of . For the second change, from the correctness of , if . Then, similarly to the first change, we can see that the second change does not affect the view of .
Proposition 6.4.
If is CCA secure, it holds that .
Proof of Proposition 6.4.
We construct an algorithm that breaks CCA security of by using .
- :
-
is given from the challenger. generates , and , and sets . sends to and obtains from . then generates , sets as the challenge plaintext of the CCA game, and receives from its challenger. also constructs , generates , and sends and to as the challenge public tag and marked circuit.
- :
-
When sends and to , simulates the answer by using , , and the decryption oracle .
After finishing ’s oracle access to , chooses , generates and , and sends to . Note that holds since and thus .
Finally, when terminates with output , outputs and terminates.
perfectly simulates if , and if . This completes the proof.
Proposition 6.5.
If satisfies selective pseudorandomness, it holds that
Proof of Proposition 6.5.
We use selective single-key pseudorandomness of . We construct an algorithm that breaks the selective single-key pseudorandomness of by using .
- :
-
generates , , and , sets and sends to , and obtains from . constructs , sends to its challenger, and receives . sends and to as the challenge public tag and marked circuit.
- :
-
When sends and to , if , computes , and computes and returns the answer by using . If , returns computed as follows. chooses , and generates . finally sends to its PRF evaluation oracle and receives .
After finishing ’s oracle access to , sends computed as follows to . first chooses and generates . If , sends to its challenge oracle and receives . If , sends to its PRF evaluation oracle and receives .
Finally, when terminates with output , outputs and terminates.
perfectly simulates if the challenge oracle returns , and if it returns . Note that in these games, , and thus if , we have (). This completes the proof.
Proposition 6.6.
If satisfies selective single-key privacy, it holds that
Proof of Proposition 6.6.
We use selective single-key privacy of . We construct an algorithm that breaks the selective privacy of by using .
- :
-
generates , , and , sends to , and obtains from . constructs , sends to its challenger, and receives . sends and to as the challenge public tag and marked circuit.
- :
-
When sends and to , if , computes and returns the answer computed by using . If , returns the answer computed as follows. chooses , and generates . sends to its oracle and receives .
After finishing ’s oracle access to , sends computed as follows to . chooses and generates . If , chooses . If , sends to its oracle and receives .
Finally, when terminates with output , outputs and terminates.
perfectly simulates if and has access to , and if and has access to . This completes the proof.
Proposition 6.7.
If satisfies ciphertext pseudorandomness, it holds that
Proof of Proposition 6.7.
We construct an algorithm that breaks the ciphertext pseudorandomness of by using .
- :
-
generates and sends to , and obtains from . then generates and , and sends and to as the challenge public tag and marked circuit.
- :
-
When sends and to , if , computes and returns the answer computed by using . If , returns the answer computed as follows. chooses , sends to its encryption oracle, and receives . computes if and otherwise.
After finishing ’s oracle access to , sends computed as follows to . chooses , sends to its challenger as the challenge plaintext, and receives . generates if and otherwise.
Finally, when terminates with output , outputs and terminates.
perfectly simulates if , and if . This completes the proof.
Proposition 6.8.
If satisfies selective single-key privacy, it holds that
Proof of Proposition 6.8.
This proof is almost the same as that of Proposition 6.6.
Proposition 6.9.
If is CCA secure, it holds that .
Proof of Proposition 6.9.
This proof is almost the same as that of Proposition 6.4.
Proposition 6.10.
If , , and are correct, it holds that .
Proof of Proposition 6.10.
This proof is almost the same as that of Proposition 6.3.
By Propositions 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9 and 6.10, we complete the proof of Theorem 6.1.
Extended weak pseudorandomness.
Next, we prove the extended pseudorandomness of .
Proof of Theorem 6.2.
Let be an adversary attacking the extended weak pseudorandomness of . We construct that attacks the selective single-key pseudorandomness of .
- :
-
Given from , first generates , sends to its challenger, and obtains . sets , generates , and sends it to . answers ’s queries as follows.
- :
-
When this is invoked (no input), generates , sends it to its evaluation oracle, and obtains . Then, returns to .
- :
-
When this is invoked (no input), generates , outputs as its challenge input, and obtains . returns to . Note that this oracle is invoked only once.
When terminates with output , outputs and terminates.
Due to the sparseness of , except with negligible probability, we have , and thus generated when answering a query to is different from . Therefore, except with negligible probability, is a valid adversary against the selective single-key pseudorandomness of . When is valid, we see that the advantage of is the same as that of . This completes the proof.
7 Extraction-Less Watermarking PRF with Public Simulation from IO
We construct an extraction-less watermarking PRF satisfying SIM-MDD security with public simulation. We first introduce a tool.
7.1 Puncturable Encryption, Revisited
Cohen et al. [CHN+18] introduced the notion of puncturable encryption (PE). They used a PE scheme as a crucial building block to construct a publicly extractable watermarking PRF against classical adversaries. We also use a PE scheme to construct an extraction-less watermarking PRF with public simulation (against quantum adversaries). However, we find that the original PE definition is not sufficient for proving unremovability (and our purpose) since there is a subtle issue in the security proof by Cohen et al. [CHN+18]. Fortunately, we can fix the issue since their PE scheme satisfies a stronger security notion than what they proved. Thus, we introduce a stronger security notion for PE in this section.
The syntax of PE is almost the same as that of the original PE.
Definition 7.1 (Puncturable Encryption (Syntax)).
A puncturable encryption (PE) scheme for a plaintext space is a triple of PPT algorithms and a deterministic algorithm . The ciphertext space will be where .
- :
-
The key generation algorithm takes as input the security parameter and outputs an encryption key and a decryption key .
- :
-
The puncturing algorithm takes as input and a string , and outputs a “punctured” decryption key .
- :
-
The encryption algorithm takes as input and a plaintext , and outputs a ciphertext in .
- :
-
The decryption algorithm takes a possibly punctured decryption key and a string . It outputs a plaintext or the special symbol .
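The syntax can be summarized by the following interface sketch; the Python names and type signatures are placeholders, and only the input/output shapes stated in Definition 7.1 are assumed.

from typing import Optional, Protocol, Tuple

class PuncturableEncryption(Protocol):
    def gen(self, sec_param: int) -> Tuple[bytes, bytes]:
        """Return an encryption key ek and a decryption key dk."""
        ...

    def puncture(self, dk: bytes, s: bytes) -> bytes:
        """Return a decryption key punctured at the ciphertext string s."""
        ...

    def enc(self, ek: bytes, m: bytes) -> bytes:
        """Encrypt a plaintext m into a ciphertext string."""
        ...

    def dec(self, dk: bytes, c: bytes) -> Optional[bytes]:
        """Deterministically decrypt the string c; None plays the role of the failure symbol."""
        ...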
There are four security requirements on PE. Three of those are the same as those in the original PE security. The difference is ciphertext pseudorandomness.
Definition 7.2 (Puncturable Encryption Security).
A PE scheme with plaintext space and ciphertext space is required to satisfy the following properties.
- Correctness:
-
We require that for all plaintext and , it holds that .
- Punctured Correctness:
-
We require the same to hold for punctured keys. For all possible keys , all string , all punctured keys , and all potential ciphertexts :
- Sparseness:
-
We also require that most strings are not valid ciphertexts:
(34)
- Ciphertext Pseudorandomness:
-
We require that PE has strong ciphertext pseudorandomness defined in Definition 7.3.
Definition 7.3 (Strong Ciphertext Pseudorandomness).
We define the following experiment .
-
1.
sends a message to the challenger.
-
2.
The challenger does the following:
-
•
Generate
-
•
Compute a ciphertext .
-
•
Choose .
-
•
Choose and set and .
-
•
Generate a punctured key
-
•
Send to :
-
•
-
3.
outputs and the experiment outputs if ; otherwise .
We say that has strong ciphertext pseudorandomness if for every QPT adversary , it holds that
Remark 7.4 (Difference from the original PE).
In the original PE definition, takes two strings and outputs a punctured decryption key and punctured correctness is accordingly defined.
In the original ciphertext pseudorandomness (described in Section B.4), a punctured decryption key is punctured at both and . That is, the information about remains in for . This is an issue for our purpose (and the proof by Cohen et al. [CHN+18]). Thus, we introduce the strong ciphertext pseudorandomness, where the information about disappears in the case since the punctured decryption key is when .
In fact, the PE scheme by Cohen et al. [CHN+18] satisfies strong ciphertext pseudorandomness (and thus, we can also fix the issue in the proof by Cohen et al.; see Section B.4 for the details of the issue).
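Under our reading of Definition 7.3 and the discussion above, the challenger's procedure in the strong ciphertext pseudorandomness experiment can be sketched as follows. The interface pe, the adversary's methods, the bit convention, and the fact that the key is punctured only at the single challenge string are assumptions based on the surrounding text.

import secrets

def strong_cpr_game(pe, adversary, sec_param, ct_len_bytes):
    m = adversary.choose_message()
    ek, dk = pe.gen(sec_param)
    beta = secrets.randbelow(2)
    if beta == 0:
        x = pe.enc(ek, m)                      # a real ciphertext of the chosen message
    else:
        x = secrets.token_bytes(ct_len_bytes)  # a uniformly random string
    dk_punct = pe.puncture(dk, x)              # punctured only at the challenge string
    guess = adversary.guess(ek, dk_punct, x)
    return int(guess == beta)                  # the experiment outputs 1 iff the guess is correct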
Theorem 7.5.
If there exists a secure IO for circuits and the QLWE assumption holds, there exists a secure PE scheme that satisfies strong ciphertext pseudorandomness.
We prove this theorem in Appendix B.
7.2 Construction of Extraction-less Watermarking PRF with Public Simulation
We describe our extraction-less watermarking PRF for message space with domain and range below. We use the following tools:
-
•
PPRF . We denote a PRF evaluation circuit by , and a PRF evaluation circuit with a punctured key by (that is, we omit and simply write instead of ) for ease of notation.
-
•
PE scheme . The plaintext and ciphertext space of PE are and , respectively, where and ().
-
•
Indistinguishability obfuscator .
-
•
PRG .
- :
-
-
•
Output .
-
•
- :
-
-
•
Parse .
-
•
Compute .
-
•
Generate .
-
•
Output and .
-
•
- :
-
-
•
Parse .
-
•
Compute and output .
-
•
- :
-
-
•
Parse and .
-
•
Construct a circuit described in Figure 6.
-
•
Compute and output .
-
•
- :
-
-
•
Parse and .
-
•
Choose and .
-
•
Compute .
-
•
Compute .
-
•
Output
-
•
The size of the circuit is appropriately padded to be the maximum size of all modified circuits, which will appear in the security proof.
Circuit
Constants: A PRF , a PE decryption key , and a message .
Input: A string .
1. Compute .
2. If , do the following:
(a) Parse , where , , and .
(b) If , output . Otherwise, output .
3. Otherwise, output .
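The circuit in Figure 6 can be read as the following hedged sketch; the helpers pe_dec, pprf_eval, and prg, the plaintext format (i, gamma, s), the direction of the bit comparison, and the exact value returned on the triggered branch are assumptions made for illustration.

def marked_eval(pe_dec, pprf_eval, prg, pprf_key, pe_dk, message_bits, x):
    pt = pe_dec(pe_dk, x)
    if pt is None:
        return pprf_eval(pprf_key, x)   # not a trigger input: evaluate the PPRF normally
    i, gamma, s = pt                    # assumed plaintext format
    if message_bits[i] == gamma:        # trigger consistent with the embedded message (assumed direction)
        return prg(s)                   # deviate from the PPRF value (assumed form of the deviation)
    return pprf_eval(pprf_key, x)       # otherwise behave as the original evaluation circuit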
The evaluation correctness of immediately follows from the sparseness of and the functionality of . (Footnote 13: In fact, satisfies a stronger evaluation correctness than the one written in Definition 4.1. The evaluation correctness holds even for any PRF key and input, like the statistical correctness by Cohen et al. [CHN+18].) trivially satisfies pseudorandomness (against an authority) since outputs nothing, is a public key , and is independent of ( is not used in ).
Theorem 7.6.
If is a secure PPRF, is a secure PRG, is a secure PE with strong ciphertext pseudorandomness, and is a secure IO, then is an extraction-less watermarking PRF satisfying SIM-MDD security with public simulation.
Proof of Theorem 7.6.
We define a sequence of hybrid games. We sometimes omit hard-coded values when we write some circuits. For example, we simply write instead of when hard-coded values are not important in arguments or clear from the context.
- :
-
This is the same as the case in . In this game, is given and as a public tag and a marked circuit, where , is the target PRF key, and is the target message from . Also, is given as the challenge tuple, where , , , and .
- Case :
-
We consider two cases separately hereafter. First, we consider the case where . We denote these hybrid games by . Note that we can choose at any time and hard-code it into in the proof since it is a uniformly random bit in all hybrid games.
- :
-
This is the same as except that if , we use , where is described in Figure 7 and . We use a punctured decryption key instead of . However, we do not use a punctured key for .
Circuit
Constants: A PRF key , a (possibly punctured) PE decryption key , a message , a bit , and strings .
Input: A string .
1. If , output .
2. Compute .
3. If , do the following:
(a) Parse , where , , and .
(b) If , output . Otherwise, output .
4. Otherwise, output .
Figure 7: The description of (for )
- :
-
This is the same as except that we generate
-
•
,
-
•
.
That is, we replace and with and , respectively.
We also rename into (these distributions are the same).
-
•
- :
-
This is the same as except that we use instead of .
- Case :
-
Next, we consider the case where . We denote these hybrid games by .
- :
-
This is the same as except that if , we generate , where is described in Figure 8 and . We use a punctured decryption key instead of . However, we do not use a punctured key for at this point.
Circuit
Constants: A (possibly punctured) PRF key , a (possibly punctured) PE decryption key , a message , a bit , and strings .
Input: A string .
1. If , output .
2. Compute .
3. If , do the following:
(a) Parse , where , , and .
(b) If , output . Otherwise, output .
4. Otherwise, output .
Figure 8: The description of (for )
- :
-
This is the same as except that
-
•
,
-
•
.
That is, we replace and with and , respectively. We also rename into (these distributions are the same).
-
•
- :
-
This is the same as except that we use instead of .
- :
-
This is the same as except that we use instead of .
- :
-
This is the same as except that we use instead of .
- :
-
This is the same as except that we use instead of .
- End of case analysis:
-
The two case analyses end. The remaining transitions are the reverse of the transitions from to or .
- and :
Each of the last hybrids is the same as the case in . That is, is given and . Recall that and
-
•
if (see ),
-
•
if (see ).
[Table: hybrid-game summary; the security column lists the property justifying each step (N/A; IO & PE punctured correctness).]
[Table: hybrid-game transitions justified, in order, by strong ciphertext pseudorandomness (S-CPR), PRG security, and IO together with PE punctured correctness.]
[Table: hybrid-game transitions justified, in order, by S-CPR, PRG security, IO together with PPRF punctured correctness, PPRF security, IO together with PPRF punctured correctness, and IO together with PE punctured correctness.]
We prove the lemma by proving the following propositions.
The case .
We first prove propositions for the case .
Proposition 7.7.
If is a secure IO and satisfies punctured correctness, it holds that
Proof of Proposition 7.7.
The difference between the two games is that is used for instead of in the case where . These two circuits are the same except that
-
•
for input , directly outputs ,
due to the punctured correctness of . Thus, if the following hold, and are functionally equivalent:
-
•
outputs when .
This holds since and runs the item (b) in Figure 6, but does not hold in this case.
Thus, and are functionally equivalent and the proposition holds due to IO security.
Proposition 7.8.
If satisfies strong ciphertext pseudorandomness, it holds that
Proof of Proposition 7.8.
We construct an algorithm for strong ciphertext pseudorandomness by using . generates , and chooses and . sends to the challenger. The challenger returns to .
Then, passes and to . also computes .
- Challenge:
-
When sends a challenge query , does the following
-
•
Construct as described in Figure 7.
-
•
Return and to .
-
•
After finishing ’s challenge query, computes and sends to . Finally, when terminates with output , outputs and terminates. perfectly simulates
-
•
if ,
-
•
if .
Thus, we see that the proposition holds.
Proposition 7.9.
If is a secure PRG, it holds that .
Proof of Proposition 7.9.
The difference between the two games is that in the target triple is or random in the case where . Recall that we rename to . Note that we randomly choose and use and in these games. Thus, we can apply pseudorandomness of since the value is never used anywhere else.
The case where .
Next, we prove propositions for the case where .
Proposition 7.10.
If is a secure IO and satisfies punctured correctness, it holds that
Proof of Proposition 7.10.
The difference between the two games is that is used for the challenge query instead of in the case where . These two circuits are the same except that
-
•
for input , directly outputs the hard-wired value ,
due to the punctured correctness of . Thus, if the following hold, and are functionally equivalent:
-
•
outputs when ,
This holds since , runs the item (b) in Figure 6, and holds in this case. Thus, and are functionally equivalent and the proposition holds due to IO security.
Proposition 7.11.
If satisfies strong ciphertext pseudorandomness, it holds that
Proof of Proposition 7.11.
We construct an algorithm for strong ciphertext pseudorandomness by using a distinguisher . generates and chooses and . sends to the challenger. The challenger returns to .
Then, passes and to . also computes .
- Challenge:
-
When sends a challenge query , does the following
-
•
Construct where as described in Figure 8.
-
•
Return and to .
-
•
After finishing ’s challenge, sends to . Finally, when terminates with output , outputs and terminates.
perfectly simulates
-
•
if ,
-
•
if .
Thus, we see that the proposition holds.
Proposition 7.12.
If is a secure PRG, it holds that .
Proof of Proposition 7.12.
The difference between the two games is that in the target triple is or random in the case where . Recall that we rename to . Note that we randomly choose and use and in these games. Thus, we can apply pseudorandomness of since the value is never used anywhere else.
Proposition 7.13.
If is a secure IO and satisfies punctured correctness, it holds that
Proof of Proposition 7.13.
The difference between the two games is that is used for the challenge query instead of in the case where . These two circuits are the same except that we use instead of . Those two circuits are functionally equivalent since is functionally equivalent to except for , and both and directly output by the description of . Note that does not have any “if branch” condition that uses or .
Thus, the proposition holds due to IO security.
Proposition 7.14.
If satisfies punctured pseudorandomness, it holds that
Proof of Proposition 7.14.
We construct an algorithm that breaks the pseudorandomness at punctured points of by using .
generates , chooses and , sends as the challenge to its challenger of , and receives and . Here does not rely on , so we can generate before is fixed. sends and to . also computes .
- Challenge:
-
For query , can simulate the target marked circuit by using , , , and the public tag .
After finishing ’s challenge query, sends to . Finally, when terminates with output , outputs and terminates.
perfectly simulates
-
•
if ,
-
•
if .
The punctured pseudorandomness of immediately implies this proposition.
Proposition 7.15.
If is a secure IO and satisfies punctured correctness, it holds that
Proof of Proposition 7.15.
The difference between the two games is that is used for the challenge query instead of in the case where . These two circuits are the same except that we use instead of . This proof is the same as that of Proposition 7.13 (in a reverse manner). Thus, we omit it.
End of case analyses.
We complete the two case analyses.
Proposition 7.16.
If is a secure IO, satisfies punctured correctness, and satisfies punctured correctness, it holds that and .
Proof.
These proofs are the same as those of Propositions 7.7 and 7.10, respectively (in a reverse manner). Thus, we omit them.
We complete the proof of Theorem 7.6.
8 Putting Pieces Altogether
Privately extractable watermarking PRF.
We summarize how to obtain our privately extractable watermarking PRF.
By Theorems 6.1, 6.2, 2.27, 2.30 and 2.24, we obtain an extraction-less watermarking PRF with private simulation from the QLWE assumption. By combining this with Theorems 4.6 and 5.2, we obtain the following theorem.
Theorem 8.1.
If the QLWE assumption holds, there exists a privately extractable watermarking PRF.
Publicly extractable watermarking PRF.
We summarize how to obtain our publicly extractable watermarking PRF.
By Theorems 7.6, 7.5, 2.21 and 2.17, we obtain an extraction-less watermarking PRF with public simulation from IO and the QLWE assumption since OWFs can be instantiated with the QLWE assumption. By combining this with Theorem 5.3, we obtain a publicly extractable watermarking PRF from IO and the QLWE assumption. Thus, we obtain the following theorem.
Theorem 8.2.
If there exists a secure IO and the QLWE assumption holds, there exists a publicly extractable watermarking PRF.
References
- [AHU19] Andris Ambainis, Mike Hamburg, and Dominique Unruh. Quantum security proofs using semi-classical oracles. In Alexandra Boldyreva and Daniele Micciancio, editors, CRYPTO 2019, Part II, volume 11693 of LNCS, pages 269–295. Springer, Heidelberg, August 2019.
- [AKPW13] Joël Alwen, Stephan Krenn, Krzysztof Pietrzak, and Daniel Wichs. Learning with rounding, revisited - new reduction, properties and applications. In Ran Canetti and Juan A. Garay, editors, CRYPTO 2013, Part I, volume 8042 of LNCS, pages 57–74. Springer, Heidelberg, August 2013.
- [AL21] Prabhanjan Ananth and Rolando L. La Placa. Secure software leasing. In Anne Canteaut and François-Xavier Standaert, editors, EUROCRYPT 2021, Part II, volume 12697 of LNCS, pages 501–530. Springer, Heidelberg, October 2021.
- [ALL+21] Scott Aaronson, Jiahui Liu, Qipeng Liu, Mark Zhandry, and Ruizhe Zhang. New approaches for quantum copy-protection. In Tal Malkin and Chris Peikert, editors, CRYPTO 2021, Part I, volume 12825 of LNCS, pages 526–555, Virtual Event, August 2021. Springer, Heidelberg.
- [AP20] Shweta Agrawal and Alice Pellet-Mary. Indistinguishability obfuscation without maps: Attacks and fixes for noisy linear FE. In Anne Canteaut and Yuval Ishai, editors, EUROCRYPT 2020, Part I, volume 12105 of LNCS, pages 110–140. Springer, Heidelberg, May 2020.
- [ARU14] Andris Ambainis, Ansis Rosmanis, and Dominique Unruh. Quantum attacks on classical proof systems: The hardness of quantum rewinding. In 55th FOCS, pages 474–483. IEEE Computer Society Press, October 2014.
- [BDF+11] Dan Boneh, Özgür Dagdelen, Marc Fischlin, Anja Lehmann, Christian Schaffner, and Mark Zhandry. Random oracles in a quantum world. In Dong Hoon Lee and Xiaoyun Wang, editors, ASIACRYPT 2011, volume 7073 of LNCS, pages 41–69. Springer, Heidelberg, December 2011.
- [BGI+12] Boaz Barak, Oded Goldreich, Russell Impagliazzo, Steven Rudich, Amit Sahai, Salil P. Vadhan, and Ke Yang. On the (im)possibility of obfuscating programs. Journal of the ACM, 59(2):6:1–6:48, 2012.
- [BGI14] Elette Boyle, Shafi Goldwasser, and Ioana Ivan. Functional signatures and pseudorandom functions. In Hugo Krawczyk, editor, PKC 2014, volume 8383 of LNCS, pages 501–519. Springer, Heidelberg, March 2014.
- [BGMZ18] James Bartusek, Jiaxin Guan, Fermi Ma, and Mark Zhandry. Return of GGH15: Provable security against zeroizing attacks. In Amos Beimel and Stefan Dziembowski, editors, TCC 2018, Part II, volume 11240 of LNCS, pages 544–574. Springer, Heidelberg, November 2018.
- [BHH+19] Nina Bindel, Mike Hamburg, Kathrin Hövelmanns, Andreas Hülsing, and Edoardo Persichetti. Tighter proofs of CCA security in the quantum random oracle model. In Dennis Hofheinz and Alon Rosen, editors, TCC 2019, Part II, volume 11892 of LNCS, pages 61–90. Springer, Heidelberg, December 2019.
- [BLW17] Dan Boneh, Kevin Lewi, and David J. Wu. Constraining pseudorandom functions privately. In Serge Fehr, editor, PKC 2017, Part II, volume 10175 of LNCS, pages 494–524. Springer, Heidelberg, March 2017.
- [BSW06] Dan Boneh, Amit Sahai, and Brent Waters. Fully collusion resistant traitor tracing with short ciphertexts and private keys. In Serge Vaudenay, editor, EUROCRYPT 2006, volume 4004 of LNCS, pages 573–592. Springer, Heidelberg, May / June 2006.
- [BTVW17] Zvika Brakerski, Rotem Tsabary, Vinod Vaikuntanathan, and Hoeteck Wee. Private constrained PRFs (and more) from LWE. In Yael Kalai and Leonid Reyzin, editors, TCC 2017, Part I, volume 10677 of LNCS, pages 264–302. Springer, Heidelberg, November 2017.
- [BW13] Dan Boneh and Brent Waters. Constrained pseudorandom functions and their applications. In Kazue Sako and Palash Sarkar, editors, ASIACRYPT 2013, Part II, volume 8270 of LNCS, pages 280–300. Springer, Heidelberg, December 2013.
- [CC17] Ran Canetti and Yilei Chen. Constraint-hiding constrained PRFs for NC1 from LWE. In Jean-Sébastien Coron and Jesper Buus Nielsen, editors, EUROCRYPT 2017, Part I, volume 10210 of LNCS, pages 446–476. Springer, Heidelberg, April / May 2017.
- [CFN94] Benny Chor, Amos Fiat, and Moni Naor. Tracing traitors. In Yvo Desmedt, editor, CRYPTO’94, volume 839 of LNCS, pages 257–270. Springer, Heidelberg, August 1994.
- [CHN+18] Aloni Cohen, Justin Holmgren, Ryo Nishimaki, Vinod Vaikuntanathan, and Daniel Wichs. Watermarking cryptographic capabilities. SIAM Journal on Computing, 47(6):2157–2202, 2018.
- [CHVW19] Yilei Chen, Minki Hhan, Vinod Vaikuntanathan, and Hoeteck Wee. Matrix PRFs: Constructions, attacks, and applications to obfuscation. In Dennis Hofheinz and Alon Rosen, editors, TCC 2019, Part I, volume 11891 of LNCS, pages 55–80. Springer, Heidelberg, December 2019.
- [CMSZ21] Alessandro Chiesa, Fermi Ma, Nicholas Spooner, and Mark Zhandry. Post-quantum succinct arguments: Breaking the quantum rewinding barrier. In Nisheeth Vishnoi, editor, FOCS 2021 (to appear). IEEE, 2021.
- [DQV+21] Lalita Devadas, Willy Quach, Vinod Vaikuntanathan, Hoeteck Wee, and Daniel Wichs. Succinct LWE sampling, random polynomials and obfuscation. In Kobbi Nissim and Brent Waters, editors, TCC 2021, LNCS. Springer, 2021.
- [GGM86] Oded Goldreich, Shafi Goldwasser, and Silvio Micali. How to construct random functions. Journal of the ACM, 33(4):792–807, 1986.
- [GKM+19] Rishab Goyal, Sam Kim, Nathan Manohar, Brent Waters, and David J. Wu. Watermarking public-key cryptographic primitives. In Alexandra Boldyreva and Daniele Micciancio, editors, CRYPTO 2019, Part III, volume 11694 of LNCS, pages 367–398. Springer, Heidelberg, August 2019.
- [GKW19] Rishab Goyal, Venkata Koppula, and Brent Waters. New approaches to traitor tracing with embedded identities. In Dennis Hofheinz and Alon Rosen, editors, TCC 2019, Part II, volume 11892 of LNCS, pages 149–179. Springer, Heidelberg, December 2019.
- [GKWW21] Rishab Goyal, Sam Kim, Brent Waters, and David J. Wu. Beyond software watermarking: Traitor-tracing for pseudorandom functions. In Mehdi Tibouchi and Huaxiong Wang, editors, Asiacrypt 2021 (to appear), Lecture Notes in Computer Science. Springer, 2021.
- [GP21] Romain Gay and Rafael Pass. Indistinguishability obfuscation from circular security. In Samir Khuller and Virginia Vassilevska Williams, editors, STOC ’21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, Virtual Event, Italy, June 21-25, 2021, pages 736–749. ACM, 2021.
- [HILL99] Johan Håstad, Russell Impagliazzo, Leonid A. Levin, and Michael Luby. A pseudorandom generator from any one-way function. SIAM Journal on Computing, 28(4):1364–1396, 1999.
- [HJL21] Samuel B. Hopkins, Aayush Jain, and Huijia Lin. Counterexamples to new circular security assumptions underlying iO. In Tal Malkin and Chris Peikert, editors, CRYPTO 2021, Part II, volume 12826 of LNCS, pages 673–700, Virtual Event, August 2021. Springer, Heidelberg.
- [HMW07] Nicholas Hopper, David Molnar, and David Wagner. From weak to strong watermarking. In Salil P. Vadhan, editor, TCC 2007, volume 4392 of LNCS, pages 362–382. Springer, Heidelberg, February 2007.
- [Jor75] Camille Jordan. Essai sur la géométrie à dimensions. Bulletin de la Société Mathématique de France, 3:103–174, 1875.
- [KNY21] Fuyuki Kitagawa, Ryo Nishimaki, and Takashi Yamakawa. Secure software leasing from standard assumptions. In Kobbi Nissim and Brent Waters, editors, TCC 2021, LNCS. Springer, 2021.
- [KPTZ13] Aggelos Kiayias, Stavros Papadopoulos, Nikos Triandopoulos, and Thomas Zacharias. Delegatable pseudorandom functions and applications. In Ahmad-Reza Sadeghi, Virgil D. Gligor, and Moti Yung, editors, ACM CCS 2013, pages 669–684. ACM Press, November 2013.
- [KW19] Sam Kim and David J. Wu. Watermarking PRFs from lattices: Stronger security via extractable PRFs. In Alexandra Boldyreva and Daniele Micciancio, editors, CRYPTO 2019, Part III, volume 11694 of LNCS, pages 335–366. Springer, Heidelberg, August 2019.
- [KW21] Sam Kim and David J. Wu. Watermarking cryptographic functionalities from standard lattice assumptions. J. Cryptol., 34(3):28, 2021.
- [MW05] Chris Marriott and John Watrous. Quantum Arthur-Merlin games. Comput. Complex., 14(2):122–152, 2005.
- [Nao91] Moni Naor. Bit commitment using pseudorandomness. Journal of Cryptology, 4(2):151–158, January 1991.
- [Nis13] Ryo Nishimaki. How to watermark cryptographic functions. In Thomas Johansson and Phong Q. Nguyen, editors, EUROCRYPT 2013, volume 7881 of LNCS, pages 111–125. Springer, Heidelberg, May 2013.
- [Nis19] Ryo Nishimaki. How to watermark cryptographic functions by bilinear maps. IEICE Transactions, 102-A(1):99–113, 2019.
- [Nis20] Ryo Nishimaki. Equipping public-key cryptographic primitives with watermarking (or: A hole is to watermark). In Rafael Pass and Krzysztof Pietrzak, editors, TCC 2020, Part I, volume 12550 of LNCS, pages 179–209. Springer, Heidelberg, November 2020.
- [NSS99] David Naccache, Adi Shamir, and Julien P. Stern. How to copyright a function? In Hideki Imai and Yuliang Zheng, editors, PKC’99, volume 1560 of LNCS, pages 188–196. Springer, Heidelberg, March 1999.
- [NWZ16] Ryo Nishimaki, Daniel Wichs, and Mark Zhandry. Anonymous traitor tracing: How to embed arbitrary information in a key. In Marc Fischlin and Jean-Sébastien Coron, editors, EUROCRYPT 2016, Part II, volume 9666 of LNCS, pages 388–419. Springer, Heidelberg, May 2016.
- [Pei09] Chris Peikert. Public-key cryptosystems from the worst-case shortest vector problem: extended abstract. In Michael Mitzenmacher, editor, 41st ACM STOC, pages 333–342. ACM Press, May / June 2009.
- [PS18] Chris Peikert and Sina Shiehian. Privately constraining and programming PRFs, the LWE way. In Michel Abdalla and Ricardo Dahab, editors, PKC 2018, Part II, volume 10770 of LNCS, pages 675–701. Springer, Heidelberg, March 2018.
- [PW11] Chris Peikert and Brent Waters. Lossy trapdoor functions and their applications. SIAM Journal on Computing, 40(6):1803–1844, 2011.
- [QWZ18] Willy Quach, Daniel Wichs, and Giorgos Zirdelis. Watermarking PRFs under standard assumptions: Public marking and security with extraction queries. In Amos Beimel and Stefan Dziembowski, editors, TCC 2018, Part II, volume 11240 of LNCS, pages 669–698. Springer, Heidelberg, November 2018.
- [Reg] Oded Regev. Witness-preserving amplification of QMA (lecture notes). https://cims.nyu.edu/~regev/teaching/quantum_fall_2005/ln/qma.pdf.
- [Reg09] Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. Journal of the ACM, 56(6):34:1–34:40, 2009.
- [SW21] Amit Sahai and Brent Waters. How to use indistinguishability obfuscation: Deniable encryption, and more. SIAM J. Comput., 50(3):857–908, 2021.
- [Unr12] Dominique Unruh. Quantum proofs of knowledge. In David Pointcheval and Thomas Johansson, editors, EUROCRYPT 2012, volume 7237 of LNCS, pages 135–152. Springer, Heidelberg, April 2012.
- [Wat09] John Watrous. Zero-knowledge against quantum attacks. SIAM J. Comput., 39(1):25–58, 2009.
- [WW21] Hoeteck Wee and Daniel Wichs. Candidate obfuscation via oblivious LWE sampling. In Anne Canteaut and François-Xavier Standaert, editors, EUROCRYPT 2021, Part III, volume 12698 of LNCS, pages 127–156. Springer, Heidelberg, October 2021.
- [YAL+19] Rupeng Yang, Man Ho Au, Junzuo Lai, Qiuliang Xu, and Zuoxia Yu. Collusion resistant watermarking schemes for cryptographic functionalities. In Steven D. Galbraith and Shiho Moriai, editors, ASIACRYPT 2019, Part I, volume 11921 of LNCS, pages 371–398. Springer, Heidelberg, December 2019.
- [YAYX20] Rupeng Yang, Man Ho Au, Zuoxia Yu, and Qiuliang Xu. Collusion resistant watermarkable PRFs from standard assumptions. In Daniele Micciancio and Thomas Ristenpart, editors, CRYPTO 2020, Part I, volume 12170 of LNCS, pages 590–620. Springer, Heidelberg, August 2020.
- [YF11] Maki Yoshida and Toru Fujiwara. Toward digital watermarking for cryptographic data. IEICE Transactions, 94-A(1):270–272, 2011.
- [Zha12a] Mark Zhandry. How to construct quantum random functions. In 53rd FOCS, pages 679–687. IEEE Computer Society Press, October 2012.
- [Zha12b] Mark Zhandry. Secure identity-based encryption in the quantum random oracle model. In Reihaneh Safavi-Naini and Ran Canetti, editors, CRYPTO 2012, volume 7417 of LNCS, pages 758–775. Springer, Heidelberg, August 2012.
- [Zha19] Mark Zhandry. How to record quantum queries, and applications to quantum indifferentiability. In Alexandra Boldyreva and Daniele Micciancio, editors, CRYPTO 2019, Part II, volume 11693 of LNCS, pages 239–268. Springer, Heidelberg, August 2019.
- [Zha20] Mark Zhandry. Schrödinger’s pirate: How to trace a quantum decoder. In Rafael Pass and Krzysztof Pietrzak, editors, TCC 2020, Part III, volume 12552 of LNCS, pages 61–91. Springer, Heidelberg, November 2020.
Appendix A Achieving QSIM-MDD from SIM-MDD
We prove Theorem 4.6, that is, we show that we can transform an extraction-less watermarking PRF satisfying SIM-MDD security with private simulation into one satisfying QSIM-MDD security with private simulation by using a QPRF. Before the proof, we introduce the semi-classical one-way to hiding (O2H) lemma.
A.1 Semi-Classical One-Way to Hiding (O2H) Lemma
We recall a few lemmas.
Definition A.1 (Punctured oracle).
Let $H \colon X \rightarrow Y$ be any function, and $S \subseteq X$ be a set. The oracle $H \setminus S$ ("$H$ punctured by $S$") takes as input a value $x$. It first computes whether $x \in S$ into an auxiliary register and measures it. Then it computes $H(x)$ and returns the result. Let $\mathsf{Find}$ be the event that any of the measurements returns $1$.
Lemma A.2 (Semi-classical O2H [AHU19, Theorem 1]).
Let $G, H \colon X \rightarrow Y$ be random functions, $z$ be a random value, and $S \subseteq X$ be a random set such that $G(x) = H(x)$ for every $x \notin S$. The tuple $(G, H, S, z)$ may have arbitrary joint distribution. Furthermore, let $\mathcal{A}$ be a quantum oracle algorithm. Let $\mathtt{Ev}$ be any classical event. Then we have
(35)
Lemma A.3 (Search in semi-classical oracle [AHU19, Theorem 2]).
Let $H \colon X \rightarrow Y$ be a random function, let $z$ be a random value, and let $S \subseteq X$ be a random set. The tuple $(H, S, z)$ may have arbitrary joint distribution. Let $\mathcal{A}$ be a quantum oracle algorithm. If for each $x \in X$, $\Pr[x \in S] \le \epsilon$ (conditioned on $z$ and $H$), then we have
(36)
where $q$ is the number of queries to $H \setminus S$ made by $\mathcal{A}$.
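For concreteness, the standard forms of these two bounds from [AHU19] are displayed below in generic notation (with $q$ the number of queries); the exact instantiation used in (35) and (36) may differ slightly.

\[
\Bigl|\Pr\bigl[\mathtt{Ev} \colon \mathcal{A}^{H}(z)\bigr] - \Pr\bigl[\mathtt{Ev} \colon \mathcal{A}^{G}(z)\bigr]\Bigr| \le 2\sqrt{(q+1)\cdot \Pr\bigl[\mathsf{Find} \colon \mathcal{A}^{H \setminus S}(z)\bigr]},
\qquad
\Pr\bigl[\mathsf{Find} \colon \mathcal{A}^{H \setminus S}(z)\bigr] \le 4q\epsilon .
\]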
A.2 Proof
Construction.
We start with the construction. Let be an extraction-less watermarking PRF scheme satisfying SIM-MDD security with private simulation. We also let the message space of be . Let be a QPRF with domain and range , which is the randomness space of . We construct an extraction-less watermarking PRF scheme satisfying QSIM-MDD security with private simulation as follows. We use , , and as , , and , respectively. The domain and range of are the same as those of . The mark space of is . Also, we construct and as follows; a generic sketch of the underlying derandomization idea follows the description.
- :
-
-
•
Generate .
-
•
Generate .
-
•
Output .
-
•
- :
-
-
•
Parse .
-
•
Output .
-
•
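The heart of this transformation is derandomization: the simulation algorithm's random coins are derived from the query via a quantum-accessible PRF, so repeated (or superposed) queries on the same input are answered consistently. The following Python sketch illustrates only this generic idea; the names Sim, prf, and the key layout are illustrative placeholders, not the scheme's actual interfaces.

import hashlib
import hmac
import os


def prf(key: bytes, data: bytes) -> bytes:
    """Illustrative stand-in for a quantum-accessible PRF F_K."""
    return hmac.new(key, data, hashlib.sha256).digest()


class DerandomizedSim:
    """Wraps a randomized simulator so that its coins are derived
    deterministically from the query via a PRF key K; answering the
    same query twice then uses identical coins."""

    def __init__(self, sim, prf_key: bytes):
        self.sim = sim          # sim(query, coins) -> answer
        self.prf_key = prf_key  # PRF key generated alongside the other keys

    def query(self, query: bytes) -> bytes:
        coins = prf(self.prf_key, query)   # r := F_K(query)
        return self.sim(query, coins)      # Sim(query; r)


# Toy usage: the "simulator" here just mixes the query with its coins.
if __name__ == "__main__":
    toy_sim = lambda q, r: hashlib.sha256(q + r).digest()
    oracle = DerandomizedSim(toy_sim, prf_key=os.urandom(32))
    assert oracle.query(b"gamma") == oracle.query(b"gamma")  # consistent answers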
Security analysis.
Let and be any QPT adversary for QSIM-MDD security with private simulation making total queries to and . We prove that for any polynomial , it holds that . We prove it using hybrid games. Let be the event that the final output is in Game . We define a distribution as
- :
-
Output .
- Game :
-
This is . Thus, .
-
1.
The challenger generates and , and gives to . sends to the challenger. The challenger generates , computes , and sends to .
-
2.
can access the following oracles.
- :
-
On input and , it returns , where .
- :
-
On input and a quantum state , it returns the result of and the post-measurement state, where .
-
3.
The challenger generates . If , the challenger samples . If , the challenger generates , where . The challenger sends to .
-
4.
When terminates with output , the challenger outputs if and otherwise.
-
1.
- Game :
-
This game is the same as Game except that is replaced with a quantum-accessible random function .
We have from the security of .
- Game :
-
This game is the same as Game except that is replaced with
(37) where .
We have .
- Game :
-
This game is the same as Game except the following. When makes a query and to , is returned instead of . Also, when makes a query to , is performed instead of , where and .
By this change, is now used only for generating the challenge tuple .
- Game :
-
This game is the same as Game except that is replaced with , where and are random functions and is a polynomial specified later.
Theorem A.4 (Small Range Distribution [Zha12a]).
For any QPT adversary making quantum queries to or , we have .
By the above theorem, we have .
We can simulate using a -wise independent function by the following theorem.
Theorem A.5 ([Zha12b]).
For any QPT adversary making quantum queries to or , we have .
We can efficiently simulate in Game using samples from since can be interpreted as a mapping for samples from . Then, from the SIM-MDD security with private simulation of , we have . From the above, we also have for some negligible function . Thus, by setting , we obtain .
Since is any polynomial, this means that .
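To spell out the final accounting in generic notation (the symbols $q$, $r$, $p$, and the $\epsilon$'s are placeholders, and the small-range loss is the bound of Theorem A.4, commonly cited as $O(q^{3}/r)$):

\[
\Bigl|\Pr[\text{Game } 0 = 1] - \tfrac{1}{2}\Bigr| \le \epsilon_{\mathrm{qprf}} + \epsilon_{1\to 3} + O\!\bigl(q^{3}/r\bigr) + \epsilon_{\mathrm{sim}},
\]

where $\epsilon_{\mathrm{qprf}}$ and $\epsilon_{1\to 3}$ are the (negligible or zero) losses of the first three transitions and $\epsilon_{\mathrm{sim}}$ is the loss of the final reduction to SIM-MDD security. Choosing the polynomial $r$ large enough that the $O(q^{3}/r)$ term is at most $1/(2p(\lambda))$ yields the claimed bound for every polynomial $p$.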
Remark A.6.
It is easy to see that the extended weak pseudorandomness of is preserved after we apply the transformation above since the evaluation algorithm is the same as that of and extended weak pseudorandomness holds against adversaries that generate . Thus, we omit a formal proof.
Appendix B Puncturable Encryption with Strong Ciphertext Pseudorandomness
We prove Theorem 7.5 in this section.
B.1 Tools for PE
Definition B.1 (Statistically Injective PPRF).
If a PPRF family satisfies the following, we call it a statistically injective PPRF family with failure probability . With probability over the random choice of , for all , if , then . If is not specified, it is a negligible function.
Sahai and Waters show that we can convert any PPRF into a statistically injective PPRF [SW21].
Theorem B.2 ([SW21]).
If OWFs exist, then for all efficiently computable functions , , and such that , there exists a statistically injective PPRF family with failure probability that maps bits to bits.
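A rough sketch of the construction behind this theorem, paraphrasing [SW21] with placeholder notation: XOR a PPRF with a pairwise-independent hash of the input,

\[
F'_{(K, h)}(x) = F_{K}(x) \oplus h(x), \qquad h \leftarrow \mathcal{H}, \quad h \colon \{0,1\}^{n(\lambda)} \to \{0,1\}^{m(\lambda)}.
\]

For $m(\lambda) \ge 2n(\lambda) + e(\lambda)$, a union bound over input pairs shows that $F'$ is injective except with probability $2^{-e(\lambda)}$ over the choice of $h$, and puncturing $F'$ at a point simply punctures $F$ and keeps $h$.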
Definition B.3.
An injective bit-commitment with setup consists of PPT algorithms .
- :
-
The key generation algorithm takes as input the security parameter and outputs a commitment key .
- :
-
The commitment algorithm takes as input and a bit and outputs a commitment .
These satisfy the following properties.
- Computationally Hiding:
-
For any QPT , the advantage in distinguishing commitments to the two bits is negligible (a generic formulation is displayed after this definition).
- Statistically Binding:
-
It holds that
(41)
- Injective:
-
For every security parameter , there is a bound on the number of random bits used by such that if , is an injective function on except with negligible probability.
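In generic notation (the algorithm names $\mathsf{Gen}$ and $\mathsf{Com}$ and the advantage form are placeholders that may differ from the paper's formulation), the hiding and binding conditions read:

\[
\Bigl|\Pr\bigl[\mathcal{A}\bigl(\mathsf{ck}, \mathsf{Com}(\mathsf{ck}, 0)\bigr) = 1\bigr] - \Pr\bigl[\mathcal{A}\bigl(\mathsf{ck}, \mathsf{Com}(\mathsf{ck}, 1)\bigr) = 1\bigr]\Bigr| \le \mathsf{negl}(\lambda),
\qquad
\Pr_{\mathsf{ck} \leftarrow \mathsf{Gen}(1^{\lambda})}\bigl[\exists\, r_0, r_1 \colon \mathsf{Com}(\mathsf{ck}, 0; r_0) = \mathsf{Com}(\mathsf{ck}, 1; r_1)\bigr] \le \mathsf{negl}(\lambda).
\]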
Theorem B.4.
If the QLWE assumption holds, there exists a secure injective bit-commitment with setup.
This theorem follows from the following theorems.
Theorem B.5 ([Nao91]).
If there exists (injective) OWFs, there exists (injective) bit-commitment.
Theorem B.6 ([PW11, AKPW13, Adapted]).
If the QLWE assumption holds, there exists a secure injective OWF with an evaluation key generation algorithm.
Remark B.7.
The injective OWF achieved in Theorem B.6 needs an evaluation key generation algorithm, unlike the standard definition of OWFs. However, OWFs with evaluation key generation algorithms are sufficient for proving Theorem B.4 by using Theorem B.5 since we use a commitment key generation algorithm (i.e., setup) in Definition B.3. Note that no post-quantum secure injective OWF without an evaluation key generation algorithm is known so far.
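To illustrate how Theorem B.5 yields such a commitment from an (injective) length-tripling PRG, the sketch below follows Naor's scheme with the receiver's first message placed in the commitment key. The function prg is a toy hash-based stand-in (not an actual injective PRG), so the sketch only demonstrates the structure of the scheme, not the assumption.

import hashlib
import os

SEED_LEN = 16            # toy parameter: n bytes of committer randomness
OUT_LEN = 3 * SEED_LEN   # length-tripling output, as in Naor's scheme


def prg(seed: bytes) -> bytes:
    """Toy stand-in for a length-tripling (injective) PRG G: {0,1}^n -> {0,1}^{3n}."""
    out = b""
    counter = 0
    while len(out) < OUT_LEN:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:OUT_LEN]


def gen(_security_parameter: int = 128) -> bytes:
    """Setup: the commitment key is a uniformly random 3n-bit string z."""
    return os.urandom(OUT_LEN)


def commit(ck: bytes, bit: int, seed: bytes) -> bytes:
    """Commit(ck, b; s) = G(s) if b = 0, and G(s) XOR z if b = 1.
    Hiding follows from PRG security; binding (and injectivity when G is
    injective) holds with overwhelming probability over the choice of z,
    since the sparse image of G makes G(s) = G(s') XOR z unlikely."""
    mask = prg(seed)
    if bit == 0:
        return mask
    return bytes(a ^ b for a, b in zip(mask, ck))


if __name__ == "__main__":
    ck = gen()
    s = os.urandom(SEED_LEN)
    assert commit(ck, 0, s) != commit(ck, 1, s)  # commitments to different bits differ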
B.2 PE Scheme Description
We review the puncturable encryption scheme by Cohen et al. [CHN+18]. We can see that Theorem 7.5 holds by inspecting their PE scheme. The scheme utilizes the following ingredients, and the length of ciphertexts is times the length of plaintexts:
-
•
A length-doubling PRG .
-
•
A statistically injective PPRF (see Definition B.1) .
-
•
A PPRF .
-
•
An injective bit-commitment with setup using randomness in . We only use this in our security proof.
Scheme.
The scheme by Cohen et al. [CHN+18] is as follows.
- ):
- :
-
Output , where is the obfuscated circuit of described in Figure 14, that is, .
- :
-
Take , sample , and output .
- :
-
Take and return .
The size of the circuits is appropriately padded to be the maximum size of all modified circuits, which will appear in the security proof.
Circuit Constants : Injective PPRF , PPRF Inputs: 1. Compute . 2. Compute . 3. Compute . 4. Output .
Circuit Constants : Injective PPRF , PPRF Inputs: , where , , and . 1. Compute . 2. If , output . 3. Else output .
Circuit Constants : Point , injective PPRF , and PPRF Inputs: , where , , and . 1. If , output . 2. Compute . 3. If , output . 4. Else output .
B.3 PE Security Proof
Cohen et al. [CHN+18] proved correctness, punctured correctness, and sparseness of above by using a secure PRG , a secure injective PPRF , a secure PPRF , and a secure IO . Thus, we complete the proof of Theorem 7.5 by combining Theorems B.2 and B.4, and Theorem B.8 below, which we prove in this section.
Theorem B.8.
If is a secure PRG, is a secure injective PPRF, is a secure PPRF, is a secure injective bit-commitment with setup, and is a secure IO, then is a secure PE that satisfies strong ciphertext pseudorandomness.
Proof of Theorem B.8.
To prove is indistinguishable from , we define a sequence of hybrid games.
- :
-
This is the same as the real game with . That is, for the queried , the challenger does the following.
-
1.
Choose an injective PPRF and PPRF .
-
2.
Choose and compute , , and .
-
3.
Set and compute and .
-
4.
Send to the adversary.
-
1.
- :
-
This is the same as except that is uniformly random.
- :
-
This is the same as except that we use punctured and modified circuits and described in Figures 16 and 17. Intuitively, these modified circuits are punctured at input and use exceptional handling for this input.
- :
-
This is the same as except that .
- :
-
This is the same as except that we use punctured and modified circuits and described in Figures 18 and 19. Intuitively, these modified circuits are punctured at input and use and exceptional handling for .
- :
-
This is the same as except that is uniformly random. Now, , , are uniformly random and we rewrite them into , , , respectively. For ease of notation, we also denote this game by .
- :
- :
-
This is the same as the real game with . That is, for the queried , the challenger does the following.
-
1.
Choose an injective PPRF and PPRF .
-
2.
Choose , , and .
-
3.
Set and compute and .
-
4.
Send to the adversary.
-
1.
We give an overview of these hybrid games in Figure 15.
If we prove these hybrid games are indistinguishable, we complete the proof of Theorem B.8.
We prove that those hybrid games in Figure 15 are indistinguishable by Lemmata B.9, B.10, B.14, B.15, B.19, B.20 and B.25.
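Since consecutive hybrids are computationally indistinguishable, the conclusion follows by the triangle inequality (in generic notation, with $\mathsf{Hyb}_i$ ranging over the hybrids of Figure 15):

\[
\bigl|\Pr[\mathsf{Hyb}_{0} = 1] - \Pr[\mathsf{Hyb}_{\mathrm{final}} = 1]\bigr| \le \sum_{i} \bigl|\Pr[\mathsf{Hyb}_{i} = 1] - \Pr[\mathsf{Hyb}_{i+1} = 1]\bigr| \le \mathsf{negl}(\lambda).
\]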
From to .
We first move from to .
Lemma B.9.
If is a secure PRG, it holds that .
Proof of Lemma B.9.
The randomness for encryption is never used anywhere except . We can apply the PRG security and immediately obtain the lemma.
Lemma B.10.
If is a secure IO and is a secure injective PPRF, it holds that
Proof of Lemma B.10.
We change and into and , respectively.
Circuit Constants : Injective PPRF , PPRF Inputs: 1. Compute . 2. Compute . 3. Compute . 4. Output .
Circuit Constants : Point , injective PPRF , PPRF , . Inputs: , where , , and . 1. If , output . 2. Compute . 3. If , output . 4. If , output . 5. Else output .
We define a sequence of sub-hybrid games.
Proposition B.11.
If is a secure IO, it holds that .
Proof of Proposition B.11.
In these games, value is not in the image of except with negligible probability. The only difference between the two games is that is used in . Thus, and are functionally equivalent except with negligible probability. We can obtain the proposition by applying the IO security.
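Here and in the propositions below, "applying the IO security" refers to the standard guarantee, stated generically: for equally sized circuits $C_0$ and $C_1$,

\[
\forall x \colon C_0(x) = C_1(x) \quad\Longrightarrow\quad i\mathcal{O}(1^{\lambda}, C_0) \stackrel{c}{\approx} i\mathcal{O}(1^{\lambda}, C_1),
\]

and when the circuits agree only except with negligible probability over the game's randomness, the same conclusion holds up to that negligible loss.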
Proposition B.12.
If is a secure IO and is injective, it holds that .
Proof of Proposition B.12.
We analyze the case where since it is the only difference between and .
-
•
If , outputs by the first line of the description. Thus, the output of is the same as that of .
-
•
If , it holds in this case. However, it should be due to the injectivity of and . Thus, both and output in this case ( outputs at the first line).
Therefore, and are functionally equivalent. We can obtain the proposition by applying the IO security.
Proposition B.13.
If is a secure IO, it holds that .
Proof of Proposition B.13.
Due to the exceptional handling in the third item of , is never computed for input . Thus, even if we use instead of , and are functionally equivalent. We can obtain the proposition by the IO security.
We complete the proof of Lemma B.10.
Lemma B.14.
If is a secure injective PPRF, it holds that .
Proof of Lemma B.14.
The difference between these two games is that is or random. We can immediately obtain the lemma by applying punctured pseudorandomness of since we use in these games.
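The punctured pseudorandomness invoked here is, in generic notation, the statement that the PRF value at the punctured point is pseudorandom even given the punctured key and the point:

\[
\bigl(K\{x^{*}\},\, x^{*},\, F(K, x^{*})\bigr) \stackrel{c}{\approx} \bigl(K\{x^{*}\},\, x^{*},\, u\bigr), \qquad u \leftarrow \{0,1\}^{m}.
\]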
Lemma B.15.
If is a secure IO and is a secure injective PPRF, it holds that
Proof of Lemma B.15.
We change and into and , respectively.
Circuit Constants : Injective PPRF , PPRF Inputs: 1. Compute . 2. Compute . 3. Compute . 4. Output .
Circuit Constants : Point , injective PPRF , PPRF , . Inputs: , where , , and . 1. If , output . 2. Compute . 3. If , output . 4. If , output . 5. Else output .
We define a sequence of sub-hybrid games.
- :
-
This is the same as except that we use punctured and set .
- :
-
This is the same as except that we still use but set .
Proposition B.16.
If is a secure IO, it holds that .
Proof of Proposition B.16.
In these games is uniformly random. By the sparsity of , is not in the image of except with negligible probability. Thus, and are functionally equivalent except with negligible probability. We obtain the proposition by the IO security.
Proposition B.17.
If is a secure IO, it holds that .
Proof of Proposition B.17.
The difference between and is that we replace “If , outputs .” with “If , outputs .”. In these games, is not in the image of except with negligible probability. Recall that means . Thus, those two circuits may differ when but . However, it does not happen in this case due to the injectivity of . Thus, and are functionally equivalent and we obtain the proposition by applying the IO security.
Proposition B.18.
If is a secure IO, it holds that .
Proof of Proposition B.18.
The difference between these two games is that we use instead of . However, is never computed by the first item of . We obtain the proposition by the IO security.
We complete the proof of Lemma B.15.
Lemma B.19.
If is a secure PPRF, it holds that .
Proof of Lemma B.19.
The difference between these two games is that is or random. We can immediately obtain the lemma by applying punctured pseudorandomness of since we use in these games.
In , , , and are uniformly random strings as , , and .
From to .
We leap to and move from to instead of directly moving from to since and are almost symmetric (but not perfectly symmetric).
Lemma B.20.
If is a secure IO and is a secure injective PPRF, it holds that
Proof of Lemma B.20.
We change and into and , respectively.
We define a sequence of sub-hybrid games.
- :
-
This is the same as except that we generate and set and described in Figure 16.
- :
-
This is the same as except that we set described in Figure 20. That is, we still use , but the modified circuit outputs for input , where and .
- :
-
This is the same as except that we set described in Figure 21. That is, we still use , but the modified circuit outputs for an input such that .
Circuit Constants : Point , injective PPRF , PPRF , , , . Inputs: , where , , and . 1. If and and , output . 2. If , output . 3. Compute . 4. If , output . 5. Else output .
Circuit Constants : Point , injective PPRF , PPRF , , , . Inputs: , where , , and . 1. If and and , output . 2. If , output . 3. Compute . 4. If , output . 5. If , output . 6. Else output .
Proposition B.21.
If is a secure IO, it holds that .
Proof of Proposition B.21.
In these games, value is not in the image of except with negligible probability. Thus, and are functionally equivalent except with negligible probability. We can obtain the proposition by applying the IO security.
Proposition B.22.
If is a secure IO and is injective, it holds that .
Proof of Proposition B.22.
The difference between and is “If and and , output .”. Although is a valid encryption, is not equal to except with negligible probability since is uniformly random. Similarly, is not equal to except with negligible probability. Thus, outputs . That is, and are functionally equivalent. We can obtain the proposition by applying the IO security.
Proposition B.23.
If is a secure IO and is injective, it holds that .
Proof of Proposition B.23.
We analyze the case where . We can reach the fourth line of if . If and , it holds that . However, it should be in this case due to the injectivity of . That is, if outputs at the fourth line, also outputs at the second line. Therefore, and are functionally equivalent. We can obtain the proposition by applying the IO security.
Proposition B.24.
If is a secure IO, it holds that .
Proof of Proposition B.24.
Due to the exceptional handling in the fourth line of , is never computed for input . Thus, even if we use instead of , and are functionally equivalent. We can obtain the proposition by the IO security.
We complete the proof of Lemma B.20.
Lemma B.25.
If is a secure IO, is a secure injective PPRF, and is a secure injective bit-commitment with setup, it holds that .
Proof of Lemma B.25.
We change and into and , respectively.
Circuit Constants : Point , injective PPRF , PPRF , , , , . Inputs: , where , , and . 1. If and and , output . 2. If , output . 3. Compute . 4. If , output . 5. If , output . 6. Else output .
Circuit Constants : Point , injective PPRF , PPRF , , , . Inputs: , where , , and . 1. If and False and , output . // Never triggered 2. If , output . 3. Compute . 4. If , output . 5. If , output . 6. Else output .
We define a sequence of sub-hybrid games.
- :
-
This is the same as except that we use instead of .
- :
-
This is the same as except that we use described in Figure 22, where and are hardwired, instead of .
- :
-
This is the same as except that we hard-code into instead of .
- :
-
This is the same as except that we use described in Figure 23.
- :
-
This is the same as except that we use punctured and set .
- :
-
This is the same as except that we still use but set .
Proposition B.26.
If is a secure PPRF, it holds that .
Proof of Proposition B.26.
In these games, we use in and . Thus, we can apply the punctured pseudorandomness and immediately obtain the proposition.
Proposition B.27.
If is a secure IO and is injective, it holds that
Proof of Proposition B.27.
The difference between and is whether we use “” or “”, where and . Since is injective, these two conditions are equivalent. Therefore, those two circuits are functionally equivalent. We obtain the proposition by applying the IO security.
Proposition B.28.
If is computationally hiding, it holds that
Proof of Proposition B.28.
The only difference between these two games is that or . Note that is never used anywhere else. We can obtain the proposition by the hiding property of .
Proposition B.29.
If is a secure IO and is statistically binding, it holds that .
Proof of Proposition B.29.
The difference between and is that the first line of is never executed. However, is hardwired in . Thus, the first line of , in particular, the condition "" is also never true except with negligible probability due to the statistical binding property of . That is, these two circuits are functionally equivalent except with negligible probability. We obtain the proposition by applying the IO security.
Proposition B.30.
If is a secure IO, it holds that .
Proof of Proposition B.30.
In these games is uniformly random. By the sparsity of , is not in the image of except with negligible probability. Thus, and are functionally equivalent except with negligible probability. We obtain the proposition by the IO security.
Proposition B.31.
If is a secure IO, it holds that .
Proof of Proposition B.31.
The difference between in Figure 23 and in Figure 19 is that we replace “If , outputs .” with “If , outputs .” since the first line of is never triggered. In these games, is not in the image of except with negligible probability. Recall that means . Thus, those two circuits may differ when but . However, it does not happen in this case due to the injectivity of . Thus, and are functionally equivalent and we obtain the proposition by applying the IO security.
Proposition B.32.
If is a secure IO, it holds that .
Proof of Proposition B.32.
The difference between these two games is that we use instead of . However, is never computed by the first line of . We obtain the proposition by the IO security.
We complete the proof of Lemma B.25.
B.4 Original Ciphertext Pseudorandomness of PE
We describe the original ciphertext pseudorandomness of PE defined by Cohen et al. [CHN+18] in this section for reference.
Definition B.33 (Ciphertext Pseudorandomness).
We define the following experiment for PE.
-
1.
sends a message to the challenger.
-
2.
The challenger does the following:
-
•
Generate
-
•
Compute encryption .
-
•
Choose .
-
•
Generate the punctured key
-
•
Choose and send the following to :
(42)
(43)
-
•
-
3.
outputs and the experiment outputs if ; otherwise .
We say that has ciphertext pseudorandomness if for every QPT adversary , the advantage in the above experiment is negligible.
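In generic notation, this condition is the standard requirement that the experiment outputs $1$ with probability at most negligibly greater than $1/2$:

\[
\Bigl|\Pr\bigl[b' = b\bigr] - \tfrac{1}{2}\Bigr| \le \mathsf{negl}(\lambda).
\]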
Issue in the proof by Cohen et al.
In the watermarking PRF by Cohen et al. [CHN+18], we use to extract an embedded message. They replace with in their proof of unremovability [CHN+18, Lemma 6.7]. Then, they use PRG security [CHN+18, Lemma 6.8] to replace with a uniformly random string since the information about disappears from the PE ciphertext. However, there is a subtle issue here. The information about remains in the punctured decryption key , which is punctured both at and , since they use ciphertext pseudorandomness in Definition B.33 and need to use the punctured decryption key. Thus, we cannot apply PRG security even after we apply the ciphertext pseudorandomness in Definition B.33. This is the reason why we introduce the strong ciphertext pseudorandomness in Definition 7.3.