
Secretive Coded Caching with Shared Caches

Shreya Shrestha Meel, and B. Sundar Rajan Manuscript received ; revised . This work was supported partly by the Science and Engineering Research Board (SERB) of Department of Science and Technology (DST), Government of India, through J. C. Bose National Fellowship to B. Sundar Rajan.The authors are with the Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore 560012, India (e-mail: {shreyameel, bsrajan}@iisc.ac.in).
Abstract

We consider the problem of secretive coded caching in a shared cache setup, where each helper cache may serve more than one user, and every user can access exactly one helper cache. In secretive coded caching, the constraint of perfect secrecy must be satisfied: the users should not gain, either from their caches or from the transmissions, any information about the content of the files they did not request from the server. To accommodate the secrecy constraint, our problem setup requires, in addition to a helper cache, a dedicated user cache of capacity at least 1 unit for every user. This is where our formulation differs from the original work on shared caches (“Fundamental Limits of Coded Caching With Multiple Antennas, Shared Caches and Uncoded Prefetching” by E. Parrinello, A. Ünsal and P. Elia in Trans. Inf. Theory, 2020). In this work, we propose a secretively achievable coded caching scheme with shared caches under centralized placement. When our scheme is applied to the dedicated cache setting, it matches the scheme by Ravindrakumar et al. (“Private Coded Caching”, in Trans. Inf. Forensics and Security, 2018).

Index Terms:
Coded caching, secretive coded caching, shared caches.

I Introduction

In wireless content delivery networks, caching helps in reducing the disparity in traffic load between peak and off-peak hours by allowing end users to store parts of the library in their local caches. The seminal work by Maddah-Ali and Niesen in [1] established that congestion of the shared broadcast link can be reduced further by making coded transmissions that are useful to more than one user simultaneously. This system operates in two phases: a placement phase and a delivery phase. In the placement phase, the server fills the users’ caches with some fraction of all the files, without knowing their future demands. In the delivery phase, users reveal their file requests, and the server responds by sending coded multicast transmissions over the broadcast link such that the users, from the transmissions and their cache contents, can recover their requested files. The number of bits transmitted over the shared link, normalized by the file size, is known as the rate of the coded caching scheme. To reduce the load on the link, the rate needs to be minimised. However, the placement and delivery in [1] violate the secrecy constraint, as users gain some information about all the files from their caches, even before they reveal their demands. In secretive coded caching, the two phases must be designed such that users obtain no information about the files that they do not request.

The secretive coded caching problem was first studied in [2, 3], where the authors proposed an achievable coded caching scheme imposing the constraint of perfect secrecy, under both centralized and decentralized settings. Their general scheme was also secure against eavesdroppers, as in [4]. This problem was extended to the setup of device-to-device caching networks in [5] and to combination networks in [6]. The authors in [7] considered secretive coded caching in a setting where at most $l$ out of the $K$ users are untrustworthy and might collude among themselves. On setting the value of $l$ to unity, this scheme reduces to the original scheme in [3].

All the works cited above [2, 3, 5, 6, 7] consider a dedicated caching network, i.e., each user accesses a cache that is distinct from those of the remaining users in the network. However, to the best of our knowledge, in a shared cache setup, where multiple users are assisted by a single helper cache, secretive coded caching has not been studied so far. References [8] and [9] studied coded caching in this setting, without imposing the secrecy constraint, with uncoded and coded placement respectively. In [8], the authors formulate the worst-case achievable rate (which they refer to as ‘delay’) and prove its optimality under uncoded placement through an index-coding-aided converse. They create multicasting opportunities in their delivery algorithm by leveraging the content overlap between distinct caches, similar to [1]. In [9], the authors provide a scheme achieving lower worst-case delay than [8] by placing uncoded and coded sub-files in the helper caches, utilizing the knowledge of the user-to-cache association profile and optimising over the sizes of these sub-files to minimise the rate.

Like any secretive coded caching problem, in our shared cache setup too, the secrecy of file contents has to be preserved in two aspects: (i) caching, and (ii) transmission. To ensure no leakage of information via (i), we employ an appropriate secret-sharing scheme to encode the files before they are placed at the helper caches. By encoding the files this way, the cached contents alone reveal no information about the files to the users. For no information leakage via (ii), we securely deliver each multicast message so that only the users who benefit from the transmission can decode it. This is done by XOR’ing each message with a unique key. This necessitates that the key used for encoding each multicast message serving a group of users is made privately available to the user caches of this group. Otherwise, any user associated with a given cache could recover the files requested by the other users connected to the same cache, resulting in a breach of the secrecy constraint. We outline the contributions of this letter below:

  • We provide a secretively achievable coded caching scheme for a caching system with shared caches.

  • When our scheme is applied to the dedicated cache setup, we show that the secretively achievable rate-memory pair obtained in [3] is recovered.

The rest of the paper is organized as follows. Section II sets up the problem. The proposed scheme is presented in Section III. In Section IV, we discuss the main results of the paper. Finally, we conclude the paper with Section V.

II Problem Setup

The system model is illustrated in Fig. 1. The server has a library of $N$ files, denoted by $W^{[N]}:=\{W^{1},W^{2},\dots,W^{N}\}$, of size $B$ bits each. There are $K$ users in the system, and the number of files in the library is at least the number of users, i.e., $N\geq K$. The users are assisted by $\Lambda\ (\leq K)$ helper caches, each with a storage capacity of $M$ files. Further, user $k$ has a cache of normalized size $M_{k}$, $\forall k\in[K]$.

Like the system model introduced in [8], the $K$ users are partitioned into $\Lambda$ groups, based on the cache they have access to. This is reflected by the user-to-cache association $\mathcal{U}=\{\mathcal{U}_{1},\mathcal{U}_{2},\ldots,\mathcal{U}_{\Lambda}\}$, where, $\forall\lambda\in[\Lambda]=\{1,2,\dots,\Lambda\}$, $\mathcal{U}_{\lambda}$ is the ordered set of users accessing cache $\lambda$. The $j^{th}$ user in $\mathcal{U}_{\lambda}$ is denoted by $\mathcal{U}_{\lambda}(j)$. The user-association profile is given by the vector $\boldsymbol{\mathcal{L}}=(\mathcal{L}_{1},\mathcal{L}_{2},\ldots,\mathcal{L}_{\Lambda})$, with $\mathcal{L}_{1}\geq\mathcal{L}_{2}\geq\cdots\geq\mathcal{L}_{\Lambda}$ and $\sum_{\lambda=1}^{\Lambda}\mathcal{L}_{\lambda}=K$, where $\mathcal{L}_{\lambda}$ represents the number of users associated with the $\lambda^{th}$ most populated helper cache. Without loss of generality, we assume that the caches are labelled in non-increasing order of the number of users that access them, i.e., $|\mathcal{U}_{\lambda}|=\mathcal{L}_{\lambda}$, $\forall\lambda\in[\Lambda]$; if not, we can simply relabel the caches to obtain this ordering. The system in our formulation operates in four phases:

  a) Helper Cache Placement Phase: The server, after encoding each file into shares, populates the helper caches with these shares, without knowing the user-to-cache association.

  b) User-to-Cache Assignment Phase: This occurs after the shares have been placed. Each user is assigned to a single helper cache, and $\mathcal{U}$ is communicated to the server.

  c) User Cache Placement Phase: Once the user-to-cache association is conveyed, the server privately places unique keys in the user caches.

  d) Delivery Phase: Each user requests a single file from the library. In response, the server broadcasts coded multicast messages over the shared link so as to satisfy the users’ demands, while ensuring that the users gain no information about the content of the files they did not request.

To summarize, the system model considered in this paper differs from the one in [8] in the following aspects:

  • User $k$ has a dedicated cache of size $M_{k}\geq 1$, $\forall k\in[K]$, to store keys.

  • There is an additional phase for placing these keys in the user caches.

Figure 1: Caching system model imposing secrecy constraint.

The content in the helper and user caches, comprising the shares and keys, that user $k$ has access to is denoted by $\mathcal{Z}_{k}$, $\forall k\in[K]$. In the delivery phase, users convey their requests to the server in the form of a demand vector $\mathbf{d}=(d_{1},d_{2},\dots,d_{K})$, where user $k$ requests $W^{d_{k}}$. We assume $N\geq K$ and the worst-case demand vector, i.e., $d_{i}\neq d_{j}$, $\forall i,j\in[K]$, $i\neq j$. For a profile $\boldsymbol{\mathcal{L}}$, let $X_{\mathbf{d}}(\boldsymbol{\mathcal{L}})$ denote the transmitted message vector corresponding to the demand $\mathbf{d}$. Given this setting, the perfect secrecy constraint has to be met by all the users. By the definition in [3], for perfect secrecy, the information leakage must be zero, i.e.,

$I(W^{[N]\setminus d_{k}};\mathcal{Z}_{k},X_{\mathbf{d}}(\boldsymbol{\mathcal{L}}))=0,\quad \forall\,\mathbf{d}\in[N]^{K},\ \forall\,k\in[K],$ (1)

where $I(\cdot\,;\cdot)$ is the mutual information, and $W^{[N]\setminus d_{k}}$ is the set of all files except the one requested by user $k$. The maximum load on the shared link of a shared cache system with profile $\boldsymbol{\mathcal{L}}$, under the constraint (1), is given by the effective number of files that the server broadcasts in the delivery phase to meet the users’ demands when they are all distinct. This is quantified by the number of bits transmitted by the server, normalised by the file size, for the worst-case demand vector, and is known as the secretively achievable rate $R^{s}(\boldsymbol{\mathcal{L}})$, expressed in units of files.

III Proposed Scheme

Before describing the general scheme, we present an example.

Example 1: Consider a network with $N=K=M=3$, $M_{k}\geq 1$, $k\in[3]$, and file size of $B$ bits. Let $\mathcal{U}_{1}=\{1,2\}$ and $\mathcal{U}_{2}=\{3\}$; hence $\boldsymbol{\mathcal{L}}=(2,1)$. The server selects $V^{1},V^{2},V^{3}$ randomly and uniformly from the Galois field $\mathbb{F}_{2^{B}}$. In the helper-cache placement phase, the server places, $\forall n\in[3]$, $S^{n}_{1}=W^{n}\oplus V^{n}$ in cache 1 and $S^{n}_{2}=V^{n}$ in cache 2. The server also selects two random keys $T_{\{1,3\}}$ and $T_{\{2\}}$ uniformly from $\mathbb{F}_{2^{B}}$. In the user cache placement phase, the server privately places $T_{\{1,3\}}$ in user caches 1 and 3, and $T_{\{2\}}$ in user cache 2. In the delivery phase, let $\mathbf{d}=(1,2,3)$. The server transmits $S^{1}_{2}\oplus S^{3}_{1}\oplus T_{\{1,3\}}$ and $S^{2}_{2}\oplus T_{\{2\}}$ (user 2 already caches $S^{2}_{1}$, so its missing share is $S^{2}_{2}=V^{2}$). User 1 can decode $W^{1}$ but obtains no information about $W^{2}$ and $W^{3}$; the same holds for users 2 and 3. The rate is thus $2$ units.
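The XOR algebra of this example can be checked with a short Python sketch (not from the paper; file contents and keys are modelled as random $B$-bit integers):

```python
import secrets

B = 32                                    # toy file size in bits (the paper leaves B general)
rnd = lambda: secrets.randbits(B)

W = {n: rnd() for n in (1, 2, 3)}         # files W^1, W^2, W^3
V = {n: rnd() for n in (1, 2, 3)}         # random pads V^1, V^2, V^3
S1 = {n: W[n] ^ V[n] for n in (1, 2, 3)}  # shares S^n_1 placed in helper cache 1
S2 = {n: V[n] for n in (1, 2, 3)}         # shares S^n_2 placed in helper cache 2
T13, T2 = rnd(), rnd()                    # keys for user groups {1,3} and {2}

# Server transmissions for demand d = (1, 2, 3):
x1 = S2[1] ^ S1[3] ^ T13                  # serves users 1 and 3
x2 = S2[2] ^ T2                           # serves user 2 (its missing share is V^2)

# User 1 (cache 1, key T13): strip the key and the known share S^3_1, combine with S^1_1.
W1_hat = (x1 ^ T13 ^ S1[3]) ^ S1[1]
# User 2 (cache 1, key T2): recover the missing share V^2, combine with S^2_1.
W2_hat = (x2 ^ T2) ^ S1[2]
# User 3 (cache 2, key T13): strip the key and the known share S^1_2, combine with S^3_2.
W3_hat = (x1 ^ T13 ^ S2[1]) ^ S2[3]

assert (W1_hat, W2_hat, W3_hat) == (W[1], W[2], W[3])
```

Without the keys, a transmission is a uniformly random pad to any user outside its group, which is exactly why users sharing a helper cache cannot eavesdrop on each other.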

We now describe the general coded caching scheme and derive the secretively achievable rate $R^{s}(\boldsymbol{\mathcal{L}})$ for memory points $M$ where the caching parameter $t=\frac{\Lambda M}{M+N}\in\{0,1,\dots,\Lambda-1\}$. The rates corresponding to non-integer values of $t$ are achievable through memory sharing. The steps involved are as follows:

III-A When $t\in\{1,2,\dots,\Lambda-1\}$

III-A1 File encoding

The first step is to encode each file in the library using a $\big(\binom{\Lambda-1}{t-1},\binom{\Lambda}{t}\big)$ non-perfect secret sharing scheme [11]. To do this, each file $W^{n}$, $\forall n\in[N]$, is split into $\binom{\Lambda}{t}-\binom{\Lambda-1}{t-1}=\binom{\Lambda-1}{t}$ equal-sized sub-files. The size of each sub-file is $F_{s}=\frac{B}{\binom{\Lambda-1}{t}}$ bits, and the sub-files of $W^{n}$ are denoted by the column vector $W^{n}=(W^{n}_{p})$, $p\in\big[\binom{\Lambda-1}{t}\big]$. Further, $\binom{\Lambda-1}{t-1}$ encryption keys, each of length $F_{s}$, are generated uniformly from $\mathbb{F}_{2^{F_{s}}}$ and independently of the files. They are denoted by the column vector $V^{n}=(V^{n}_{q})$, $q\in\big[\binom{\Lambda-1}{t-1}\big]$. The idea is to concatenate $V^{n}$ below $W^{n}$ and multiply the resulting $\binom{\Lambda}{t}$-length vector $[W^{n};V^{n}]$ with a Cauchy matrix, as presented in [7]. The share vector $S^{n}=(S^{n}_{r})$, $r\in\big[\binom{\Lambda}{t}\big]$, is given by:

$S^{n}=\mathbf{G}\cdot[W^{n};V^{n}],\quad \forall n\in[N],$ (2)

where $\mathbf{G}$ is a $\binom{\Lambda}{t}\times\binom{\Lambda}{t}$ Cauchy matrix over $\mathbb{F}_{2^{m}}$, with $2^{m}\geq 2\binom{\Lambda}{t}$, as described in [7]. A Cauchy matrix has the property that all its square sub-matrices have full rank. This ensures that no information about $W^{n}$ is revealed by any subset of $\binom{\Lambda-1}{t-1}$ or fewer elements of $S^{n}$. Now, consider the collection of sets $\mathfrak{T}:=\{\mathcal{T}\subset[\Lambda]:|\mathcal{T}|=t\}$, where $\mathcal{T}$ represents the indices of the helper caches in which a share is placed. These $\binom{\Lambda}{t}$ subsets are arranged in lexicographic order, establishing a one-to-one mapping $\phi:\mathfrak{T}\mapsto\big[\binom{\Lambda}{t}\big]$. Assign $S^{n}_{\mathcal{T}}\leftarrow S^{n}_{i}$, where $i=\phi(\mathcal{T})$. Then, the following hold:

  (i) $H(W^{n}|S^{n})=0$.

  (ii) $H(W^{n}|S^{n}_{\mathcal{T}})=H(W^{n})\implies I(W^{n};S^{n}_{\mathcal{T}})=0$.

(i) implies that, given all the $\binom{\Lambda}{t}$ shares, a file can be completely recovered. (ii) implies that any collection of $\binom{\Lambda-1}{t-1}$ or fewer shares of a given file reveals no information about that file.
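Properties (i) and (ii) rest on the full-rank property of Cauchy sub-matrices. As an illustrative sketch (the paper works over $\mathbb{F}_{2^{m}}$; here we substitute the prime field GF(101), over which the same Cauchy property holds), the following Python snippet builds a Cauchy matrix and checks that every square sub-matrix is nonsingular:

```python
from itertools import combinations

P = 101  # prime field GF(101) stands in for the GF(2^m) used in the paper

def cauchy(xs, ys, p=P):
    """Cauchy matrix C[i][j] = 1/(x_i + y_j) mod p, with all x_i + y_j nonzero."""
    return [[pow(x + y, -1, p) for y in ys] for x in xs]

def det_mod(M, p=P):
    """Determinant mod p via Gaussian elimination."""
    M = [row[:] for row in M]; n = len(M); d = 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]; d = -d
        d = d * M[c][c] % p
        inv = pow(M[c][c], -1, p)
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            M[r] = [(a - f * b) % p for a, b in zip(M[r], M[c])]
    return d % p

n = 6                                    # e.g. binom(Lambda, t) = 6 for Lambda = 4, t = 2
G = cauchy(range(1, n + 1), range(n + 1, 2 * n + 1))

# Every square sub-matrix of a Cauchy matrix is full rank:
ok = all(det_mod([[G[r][c] for c in cols] for r in rows]) != 0
         for k in range(1, n + 1)
         for rows in combinations(range(n), k)
         for cols in combinations(range(n), k))
assert ok
```

Full rank of every sub-matrix is what makes any $\binom{\Lambda-1}{t-1}$ shares look uniformly random (the padding keys $V^{n}$ absorb them), while all $\binom{\Lambda}{t}$ shares invert $\mathbf{G}$ and recover the file.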

III-A2 Placement of shares

The helper caches are filled with these shares over a private link between the caches and the server. Similar to the idea of placing a sub-file in a cache, as in [1], [8], a share $S^{n}_{\mathcal{T}}$ is placed in cache $\lambda$ if $\lambda\in\mathcal{T}$. Precisely, the following content is cached in cache $\lambda$:

$Z_{\lambda}=\{S^{n}_{\mathcal{T}}:\lambda\in\mathcal{T},\ \forall n\in[N]\}.$

The memory occupied at each helper cache is therefore $\frac{N\binom{\Lambda-1}{t-1}B}{\binom{\Lambda-1}{t}}=\frac{Nt}{\Lambda-t}B$ bits. Thus, $M=\frac{Nt}{\Lambda-t}$.
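This memory accounting can be verified numerically; the sketch below (an illustration, not from the paper) checks that $N\binom{\Lambda-1}{t-1}$ shares of $B/\binom{\Lambda-1}{t}$ bits each amount to $\frac{Nt}{\Lambda-t}$ files per helper cache:

```python
from fractions import Fraction
from math import comb

def helper_cache_memory(N, Lam, t):
    """Memory per helper cache, in files: N * C(Lam-1, t-1) shares per cache,
    each share of size B / C(Lam-1, t) bits."""
    return N * Fraction(comb(Lam - 1, t - 1), comb(Lam - 1, t))

# Matches the closed form M = N*t / (Lam - t) for all valid parameters:
for N in (3, 8, 30):
    for Lam in range(2, 8):
        for t in range(1, Lam):
            assert helper_cache_memory(N, Lam, t) == Fraction(N * t, Lam - t)
```

For instance, Example 2 below has $N=8$, $\Lambda=4$, $t=2$, giving $M=8$ files per helper cache, consistent with its setup.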

III-A3 User to cache assignment

After the caches are filled, the user-to-cache association is revealed to the server in the user-to-cache assignment phase. This knowledge is used by the server to design the subsequent steps (Sections III-A4 and III-A5).

III-A4 Placement of keys

At this stage, the server knows $\mathcal{U}$, and hence derives the profile $\boldsymbol{\mathcal{L}}$. It generates $\sum_{r=1}^{\Lambda-t}\mathcal{L}_{r}\binom{\Lambda-r}{t}$ unique keys, uniformly from $\mathbb{F}_{2^{F_{s}}}$ and independent of the files and shares. As indicated earlier, each multicast message is XOR’ed with a key. Hence, the group of users served by a multicast transmission needs to store the corresponding key privately. Note that, to take advantage of the overlap in shares placed at the helper caches, users connected to distinct helper caches should be served in a given round of transmission [8]. To achieve this, $\mathcal{U}_{1}(1),\mathcal{U}_{2}(1),\ldots,\mathcal{U}_{\Lambda}(1)$ are served in the first round; $\mathcal{U}_{1}(2),\mathcal{U}_{2}(2),\ldots,\mathcal{U}_{\alpha}(2)$ are served in the second round, where $\alpha=\max\{\lambda\in[\Lambda]:\mathcal{L}_{\lambda}\geq 2\}$, and so on. This way, there are $\mathcal{L}_{1}$ rounds, and the set of users served in round $j$ is $\mathcal{R}_{j}=\bigcup_{\lambda\in[\Lambda]}\{\mathcal{U}_{\lambda}(j):j\leq\mathcal{L}_{\lambda}\}$. Consider the set $\mathfrak{Q}:=\{\mathcal{Q}\subseteq[\Lambda]:|\mathcal{Q}|=t+1\}$. For each $j\in[\mathcal{L}_{1}]$ and $\mathcal{Q}\in\mathfrak{Q}$, determine:

$\chi_{\mathcal{Q}}=\bigcup_{\lambda\in\mathcal{Q}}\{\mathcal{U}_{\lambda}(j):j\leq\mathcal{L}_{\lambda}\}.$ (3)

Due to the non-uniformity in $\boldsymbol{\mathcal{L}}$, more than one $\mathcal{Q}$ may yield the same $\chi_{\mathcal{Q}}$ in a given round (see Example 2). Let $l_{\chi_{\mathcal{Q}}}$ denote the number of times that $\chi_{\mathcal{Q}}$ appears. The server delivers a unique multicast message for every occurrence of $\chi_{\mathcal{Q}}$, each XOR’ed with a unique key. This key is denoted by $T^{l}_{\chi_{\mathcal{Q}}}$, where $l\in[l_{\chi_{\mathcal{Q}}}]$. For simplicity of notation, we omit the superscript $l$ from $T^{l}_{\chi_{\mathcal{Q}}}$ if $l_{\chi_{\mathcal{Q}}}=1$. This gives:

$\mathcal{Z}_{k}=\{S^{n}_{\mathcal{T}}:\lambda\in\mathcal{T},\ \forall n\in[N]\}\cup\{T^{l}_{\chi_{\mathcal{Q}}}:\forall l\in[l_{\chi_{\mathcal{Q}}}],\ k\in\chi_{\mathcal{Q}}\},$ where $\lambda$ is the cache accessed by user $k$, i.e., $k\in\mathcal{U}_{\lambda}$.

To ensure decodability, the memory occupied by the keys in a user cache should equal the number of missing shares that the user needs to reconstruct the requested file. This is $\binom{\Lambda}{t}-\binom{\Lambda-1}{t-1}=\binom{\Lambda-1}{t}=\frac{B}{F_{s}}$ keys, each of $F_{s}$ bits. Hence, a memory of at least $B$ bits, or 1 file, is needed at every user cache to guarantee the secrecy requirement.
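The round structure and key counts above can be enumerated directly. The following Python sketch (illustrative; user $\mathcal{U}_{\lambda}(j)$ is encoded as the pair $(\lambda,j)$) lists every occurrence of $\chi_{\mathcal{Q}}$ and confirms both the total key count $\sum_{r}\mathcal{L}_{r}\binom{\Lambda-r}{t}$ and the per-user count $\binom{\Lambda-1}{t}$:

```python
from itertools import combinations
from math import comb

def rounds_and_groups(L, t):
    """List every occurrence of a multicast group chi_Q, round by round,
    for a non-increasing profile L; user U_lam(j) is encoded as (lam, j)."""
    Lam = len(L)
    groups = []
    for j in range(1, L[0] + 1):                        # rounds j = 1, ..., L_1
        for Q in combinations(range(1, Lam + 1), t + 1):
            chi = {(lam, j) for lam in Q if j <= L[lam - 1]}
            if chi:                                     # empty chi_Q => no key, no message
                groups.append(chi)
    return groups

L, t = (3, 2, 2, 1), 2                                  # the profile of Example 2
Lam = len(L)
groups = rounds_and_groups(L, t)

# Total number of keys (= multicast messages) matches sum_r L_r * C(Lam - r, t):
assert len(groups) == sum(L[r - 1] * comb(Lam - r, t) for r in range(1, Lam + 1))

# Every user stores exactly C(Lam - 1, t) keys, i.e. B bits = 1 file:
for lam in range(1, Lam + 1):
    for j in range(1, L[lam - 1] + 1):
        assert sum((lam, j) in g for g in groups) == comb(Lam - 1, t)
```

Groups are stored in a list rather than a set precisely because repeated $\chi_{\mathcal{Q}}$ occurrences each get their own key $T^{l}_{\chi_{\mathcal{Q}}}$.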

III-A5 Transmission of messages

Having received the demand vector $\mathbf{d}$, the server transmits in $\mathcal{L}_{1}$ rounds. Each round can be viewed as a dedicated caching network [1] with $\Lambda$ users and caching parameter $t$. Hence, the transmission scheme involves creating $\binom{\Lambda}{t+1}$ subsets $\mathcal{Q}$ of caches, each corresponding to a multicast message. For each $\mathcal{Q}$, we create the set of users $\chi_{\mathcal{Q}}$ as defined in (3). However, in round $j$, only the users in $\mathcal{R}_{j}$ are involved, hence some of the $\chi_{\mathcal{Q}}$ are empty. No transmission occurs if $\chi_{\mathcal{Q}}=\emptyset$. If $\chi_{\mathcal{Q}}$ is non-empty, it corresponds to a transmission formed by XOR’ing the shares with the unique key $T^{l}_{\chi_{\mathcal{Q}}}$, $l\in[l_{\chi_{\mathcal{Q}}}]$, shared by the users in $\chi_{\mathcal{Q}}$:

$x^{l}_{\chi_{\mathcal{Q}}}=\bigoplus_{\lambda\in\mathcal{Q}:\,\mathcal{U}_{\lambda}(j)\in\chi_{\mathcal{Q}}}S^{d_{\mathcal{U}_{\lambda}(j)}}_{\mathcal{Q}\setminus\{\lambda\}}\oplus T^{l}_{\chi_{\mathcal{Q}}}.$ (4)

Therefore, combining all rounds, the overall transmitted message vector for profile 𝓛\boldsymbol{\mathcal{L}} is

$X_{\mathbf{d}}(\boldsymbol{\mathcal{L}})=\bigcup_{j\in[\mathcal{L}_{1}]}\ \bigcup_{\mathcal{Q}\subseteq[\Lambda]:|\mathcal{Q}|=t+1}\bigg(\bigcup_{l\in[l_{\chi_{\mathcal{Q}}}]}x^{l}_{\chi_{\mathcal{Q}}}\bigg).$ (5)

III-B When $t=0$

In this case, the helper caches do not prefetch any content, and the user-to-cache assignment phase is not required. The server randomly generates $K$ unique keys, each having the size of a file. We denote the keys as $T_{k}$, $k\in[K]$, and store $T_{k}$ at user cache $k$. Once the demands are revealed, the requested files are simply transmitted after XOR’ing each file with the key of the user who demanded it. Thus, $X_{\mathbf{d}}=\{W^{d_{k}}\oplus T_{k}:k\in[K]\}$. The worst-case secretively achievable rate for $t=M=0$ is given by $R^{s}(\boldsymbol{\mathcal{L}})=K$, for all $\boldsymbol{\mathcal{L}}$.

III-C Calculation of Rate

From Section III-A5, note that the server omits the transmissions corresponding to empty $\chi_{\mathcal{Q}}$, of which there are $\binom{\Lambda-|\mathcal{R}_{j}|}{t+1}$ in round $j$. Thus, the effective number of transmissions in round $j$ is:

$\binom{\Lambda}{t+1}-\binom{\Lambda-|\mathcal{R}_{j}|}{t+1},\quad \forall j\in[\mathcal{L}_{1}],$ (6)

each of the size of a share, i.e., $F_{s}$ bits. On summing over all the $\mathcal{L}_{1}$ rounds and normalizing by the file size $B$ bits, the secretively achievable rate is:

$R^{s}(\boldsymbol{\mathcal{L}})=\frac{\sum_{j\in[\mathcal{L}_{1}]}\Big[\binom{\Lambda}{t+1}-\binom{\Lambda-|\mathcal{R}_{j}|}{t+1}\Big]}{\binom{\Lambda-1}{t}}\stackrel{(a)}{=}\frac{\sum_{r=1}^{\Lambda-t}\mathcal{L}_{r}\binom{\Lambda-r}{t}}{\binom{\Lambda-1}{t}},$ (7)

where $(a)$ is obtained by performing the steps in (91), Appendix G of [8].
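Equality $(a)$ can also be checked numerically; the sketch below (illustrative, not from the paper) compares the round-by-round count in (6) with the closed form in (7) for a few profiles:

```python
from math import comb

def rate_direct(L, t):
    """Rate via (6): count non-empty chi_Q per round, normalise by C(Lam-1, t)."""
    Lam = len(L)
    total = 0
    for j in range(1, L[0] + 1):
        served = sum(1 for l in L if l >= j)    # |R_j|: caches with at least j users
        total += comb(Lam, t + 1) - comb(Lam - served, t + 1)
    return total / comb(Lam - 1, t)

def rate_closed(L, t):
    """Rate via the closed form (7)."""
    Lam = len(L)
    return sum(L[r - 1] * comb(Lam - r, t) for r in range(1, Lam - t + 1)) / comb(Lam - 1, t)

for L in [(3, 2, 2, 1), (5, 5, 5, 5, 5, 5), (13, 8, 4, 2, 2, 1), (4, 1, 1)]:
    for t in range(len(L)):                     # t = 0, ..., Lam - 1
        assert abs(rate_direct(L, t) - rate_closed(L, t)) < 1e-9
```

For the profile $(3,2,2,1)$ with $t=2$ (Example 2 below), both expressions evaluate to $11/3$.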

Example 2: Consider a network with $N=8$, $K=8$, $\Lambda=4$, $M=8$ and $M_{k}\geq 1$, $\forall k\in[8]$; thus $t=\frac{\Lambda M}{M+N}=2$. Let $\mathcal{U}_{1}=\{1,2,3\}$, $\mathcal{U}_{2}=\{4,5\}$, $\mathcal{U}_{3}=\{6,7\}$, $\mathcal{U}_{4}=\{8\}$. Hence, $\boldsymbol{\mathcal{L}}=(3,2,2,1)$.
Placement: The files $W^{1},\dots,W^{8}$ are each encoded using a $(3,6)$ non-perfect secret sharing scheme to generate shares $S^{n}_{1,2},S^{n}_{1,3},S^{n}_{1,4},S^{n}_{2,3},S^{n}_{2,4},S^{n}_{3,4}$, $\forall n\in[8]$, each of size $F_{s}=B/3$ bits. The shares and keys accessible to the users are:

$\mathcal{Z}_{1}=\{S^{n}_{1,2},S^{n}_{1,3},S^{n}_{1,4},T_{\{1,4,6\}},T_{\{1,4,8\}},T_{\{1,6,8\}}\}$
$\mathcal{Z}_{2}=\{S^{n}_{1,2},S^{n}_{1,3},S^{n}_{1,4},T_{\{2,5,7\}},T_{\{2,5\}},T_{\{2,7\}}\}$
$\mathcal{Z}_{3}=\{S^{n}_{1,2},S^{n}_{1,3},S^{n}_{1,4},T^{1}_{\{3\}},T^{2}_{\{3\}},T^{3}_{\{3\}}\}$
$\mathcal{Z}_{4}=\{S^{n}_{1,2},S^{n}_{2,3},S^{n}_{2,4},T_{\{1,4,6\}},T_{\{1,4,8\}},T_{\{4,6,8\}}\}$
$\mathcal{Z}_{5}=\{S^{n}_{1,2},S^{n}_{2,3},S^{n}_{2,4},T_{\{2,5,7\}},T_{\{2,5\}},T_{\{5,7\}}\}$
$\mathcal{Z}_{6}=\{S^{n}_{1,3},S^{n}_{2,3},S^{n}_{3,4},T_{\{1,4,6\}},T_{\{1,6,8\}},T_{\{4,6,8\}}\}$
$\mathcal{Z}_{7}=\{S^{n}_{1,3},S^{n}_{2,3},S^{n}_{3,4},T_{\{2,5,7\}},T_{\{2,7\}},T_{\{5,7\}}\}$
$\mathcal{Z}_{8}=\{S^{n}_{1,4},S^{n}_{2,4},S^{n}_{3,4},T_{\{1,4,8\}},T_{\{1,6,8\}},T_{\{4,6,8\}}\}$

Delivery: Let the demand vector $\mathbf{d}=(1,2,3,4,5,6,7,8)$. Since $\mathcal{L}_{1}=3$, there are three rounds of transmission. The multicast messages are listed in Table I, wherein $j$ refers to the round of transmission.

TABLE I: Server transmissions
$j=1$: $S^{1}_{2,3}\oplus S^{4}_{1,3}\oplus S^{6}_{1,2}\oplus T_{\{1,4,6\}}$;  $S^{1}_{2,4}\oplus S^{4}_{1,4}\oplus S^{8}_{1,2}\oplus T_{\{1,4,8\}}$;  $S^{1}_{3,4}\oplus S^{6}_{1,4}\oplus S^{8}_{1,3}\oplus T_{\{1,6,8\}}$;  $S^{4}_{3,4}\oplus S^{6}_{2,4}\oplus S^{8}_{2,3}\oplus T_{\{4,6,8\}}$
$j=2$: $S^{2}_{2,3}\oplus S^{5}_{1,3}\oplus S^{7}_{1,2}\oplus T_{\{2,5,7\}}$;  $S^{2}_{2,4}\oplus S^{5}_{1,4}\oplus T_{\{2,5\}}$;  $S^{2}_{3,4}\oplus S^{7}_{1,4}\oplus T_{\{2,7\}}$;  $S^{5}_{3,4}\oplus S^{7}_{2,4}\oplus T_{\{5,7\}}$
$j=3$: $S^{3}_{2,3}\oplus T^{1}_{\{3\}}$;  $S^{3}_{2,4}\oplus T^{2}_{\{3\}}$;  $S^{3}_{3,4}\oplus T^{3}_{\{3\}}$

Using these $11$ transmissions, the demands of all the users are satisfied. Since each transmission comprises $B/3$ bits, the secretively achievable rate is $R^{s}(\boldsymbol{\mathcal{L}})=R^{s}((3,2,2,1))=11/3$ units.

III-D Proof of Correctness

Each transmitted message serves the users in the corresponding set $\chi_{\mathcal{Q}}$, who hold the key $T^{l}_{\chi_{\mathcal{Q}}}$, $l\in[l_{\chi_{\mathcal{Q}}}]$. As a result, they can recover a linear combination of shares after XOR’ing the received message with the key. In this linear combination, all shares other than the missing share of the requested file are available to the user via the helper cache. On receiving $\binom{\Lambda-1}{t}$ such distinct transmissions, the user obtains all the missing shares. The reconstruction of the requested file (secret) is then possible because all the $\binom{\Lambda}{t}$ shares are available to the user and the Cauchy matrix $\mathbf{G}$ is invertible.

III-E Proof of Secrecy

By the structure of the $\big(\binom{\Lambda-1}{t-1},\binom{\Lambda}{t}\big)$ secret-sharing encoding, the users obtain no information about the files from the helper caches. Furthermore, the unique keys stored are independent of the files. Thus, there is no information leakage from the caches. For each transmission, the users who do not have the key cannot decipher the message. Therefore, users connected to a given helper cache cannot eavesdrop on the messages meant for their neighbours connected to the same helper cache. Thus, (1) is satisfied.

IV Main Results

Theorem 1

In the $K$-user shared link broadcast channel with $\Lambda$ helper caches and $K$ user caches, of normalized sizes $M$ and $M_{k}=1$, $\forall k\in[K]$, respectively, with $t=\frac{\Lambda M}{M+N}\in\{0,1,\ldots,\Lambda-1\}$, the following rate $R^{s}(\boldsymbol{\mathcal{L}})$ is secretively achievable for a profile $\boldsymbol{\mathcal{L}}$:

$R^{s}(\boldsymbol{\mathcal{L}})=\frac{\sum_{r=1}^{\Lambda-t}\mathcal{L}_{r}\binom{\Lambda-r}{t}}{\binom{\Lambda-1}{t}}.$ (8)

For general $M\in[0,N(\Lambda-1)]$, the lower convex envelope of these points is secretively achievable through memory sharing.

Proof:

The proof follows from the proposed scheme in Section III and the rate calculation in (6) and (7). ∎

Remark 1

A user cache size of $M_{k}=1$ unit, $\forall k\in[K]$, is sufficient for the feasibility of our scheme. If $M_{k}>1$, the remaining memory stays unoccupied, and the scheme continues to work. If $M_{k}<1$, the keys required by user $k$ cannot be stored, and the multicast messages useful to that user cannot be decoded. To ensure decodability without storing the keys, the messages would have to be transmitted without XOR’ing them with keys, which violates (1).

In the context of secretive (non-secretive) coded caching, the caching parameter $t$ indicates the number of helper caches in which a given share (sub-file) is placed. Fig. 2 depicts the rate vs. $t$ plots for the shared cache setting with and without the secrecy constraint. The secretively achievable rate is higher than the non-secretively achievable rate $\forall t$ except at $t=0$. This is because the former involves shares, instead of sub-files, in the placement and delivery phases. The shares consume a larger number of bits than sub-files due to the randomness introduced in the form of encryption keys.

Figure 2: Transmission rates with and without secrecy for the shared cache setup with $N=K=30$ and $\Lambda=6$, for profile $\boldsymbol{\mathcal{L}}=(13,8,4,2,2,1)$.
Remark 2

For the uniform user association profile, where $\Lambda|K$, denoted by $\boldsymbol{\mathcal{L}}_{\text{unif}}=(\frac{K}{\Lambda},\frac{K}{\Lambda},\ldots,\frac{K}{\Lambda})$, the secretively achievable rate is obtained by substituting $\mathcal{L}_{r}=\frac{K}{\Lambda}$ in (8):

$R^{s}(\boldsymbol{\mathcal{L}}_{\text{unif}})=\frac{\frac{K}{\Lambda}\sum_{r=1}^{\Lambda-t}\binom{\Lambda-r}{t}}{\binom{\Lambda-1}{t}}\stackrel{(*)}{=}\frac{\frac{K}{\Lambda}\binom{\Lambda}{t+1}}{\binom{\Lambda-1}{t}}=\frac{K}{t+1},$ (9)

where $(*)$ follows from the hockey-stick identity. Further, when $\Lambda=K$ and $\boldsymbol{\mathcal{L}}_{\text{MN}}=(1,1,\dots,1)$, the secretively achievable rate coincides with that in [3].
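The hockey-stick step and the resulting $\frac{K}{t+1}$ rate can be confirmed with a short numeric check (illustrative, not part of the proof):

```python
from math import comb

# Hockey-stick identity: sum_{r=1}^{Lam-t} C(Lam-r, t) = C(Lam, t+1);
# combined with C(Lam, t+1) / C(Lam-1, t) = Lam / (t+1), it yields K / (t+1).
for Lam in range(2, 12):
    for t in range(Lam):
        lhs = sum(comb(Lam - r, t) for r in range(1, Lam - t + 1))
        assert lhs == comb(Lam, t + 1)
        K = 3 * Lam                         # any K divisible by Lam
        rate = (K / Lam) * comb(Lam, t + 1) / comb(Lam - 1, t)
        assert abs(rate - K / (t + 1)) < 1e-9
```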

Proof:

The proof of the second part of the remark is as follows. In the general achievable scheme presented in [3], the secretively achievable rate Rs(M)R^{s}(M) is given by:

$R^{s}(M)=\frac{K(N+M-1)}{N+(M-1)(K+1)}.$ (10)

In a shared cache system with profile $\boldsymbol{\mathcal{L}}_{\text{MN}}$, each user has a distinct helper cache of size $M$ units, and the user cache memory occupied by keys is $1$ unit. This is equivalent to a dedicated cache network where each user has access to a local cache of $M+1$ units. Hence, $R^{s}(\boldsymbol{\mathcal{L}}_{\text{MN}})=\frac{K(N+M)}{N+M(K+1)}$, obtained by substituting $t=\frac{KM}{M+N}$ in (9), matches the secretively achievable rate $R^{s}(M+1)$ obtained by setting $M\leftarrow M+1$ in (10). ∎
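This equivalence can be verified with exact rational arithmetic (an illustrative check, not part of the proof):

```python
from fractions import Fraction as F

def R_s_dedicated(K, N, M):
    """Rate (10) of Ravindrakumar et al., evaluated at dedicated cache size M."""
    return F(K * (N + M - 1), N + (M - 1) * (K + 1))

# Profile (1,...,1): a helper cache of M units plus 1 unit of key memory behaves
# like a dedicated cache of M+1 units, so K/(t+1) with t = K*M/(M+N) equals R^s(M+1).
for K in range(2, 8):
    N = K + 3
    for M in range(3 * N):
        t = F(K * M, M + N)
        assert F(K, 1) / (t + 1) == R_s_dedicated(K, N, M + 1)
```

Using `Fraction` keeps $t=\frac{KM}{M+N}$ exact even when it is non-integer, which is consistent with the memory-sharing argument.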

Remark 3

When all the users are connected to the same cache, i.e., $\boldsymbol{\mathcal{L}}=(K,0,\dots,0)$, the secretively achievable rate is always $K$, irrespective of the helper cache size. Hence, the purpose of caching is defeated.

Remark 4

The secretively achievable scheme exhibits the minimum transmission rate for the uniform profile, i.e., when $\boldsymbol{\mathcal{L}}=\boldsymbol{\mathcal{L}}_{\text{unif}}$. As illustrated in Fig. 3, the rate with $\boldsymbol{\mathcal{L}}=(5,5,5,5,5,5)$ is lower than that of any non-uniform profile.

Proof:

The proof follows directly from Corollary 1 of [8].

Figure 3: Secretively achievable rate $R^{s}(\boldsymbol{\mathcal{L}})$ vs. memory required $M+1$ (helper cache + user cache occupied by keys) for different user association profiles with $N=K=30$ and $\Lambda=6$.

Intuitively, the total number of missing shares is the same irrespective of $\boldsymbol{\mathcal{L}}$. However, the number of users served in a single transmission under $\boldsymbol{\mathcal{L}}_{\text{unif}}$ is always $t+1$, where $t$ is the number of helper caches in which a given share is placed. This way, the uniform profile exploits the full multicasting gain, thus requiring the minimum number of transmissions.

V Conclusion

In this letter, we proposed a secretively achievable coded caching scheme with shared caches, and showed that when our scheme is applied to the dedicated cache network, the general achievable scheme presented in [3] is recovered. In our solution, we noted that users associated with the same cache act as potential eavesdroppers on one another, which would otherwise violate secrecy; therefore, we secure each message in the delivery phase with a unique key. The secretively achievable rate varies with the user association profile $\boldsymbol{\mathcal{L}}$, and is minimum for the uniform profile $\boldsymbol{\mathcal{L}}_{\text{unif}}$. However, establishing the optimality of our scheme remains open; finding an information-theoretic lower bound on the optimal rate is an interesting direction to pursue.

References

  • [1] M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
  • [2] V. Ravindrakumar, P. Panda, N. Karamchandani, and V. Prabhakaran, “Fundamental limits of secretive coded caching,” in Proc. IEEE Int. Symp. Inf. Theory, Barcelona, Spain, July 2016, pp. 425–429.
  • [3] V. Ravindrakumar, P. Panda, N. Karamchandani, and V. M. Prabhakaran, “Private coded caching,” IEEE Trans. Inf. Forensics Security, vol. 13, no. 3, pp. 685–694, Mar. 2018.
  • [4] A. Sengupta, R. Tandon, T. C. Clancy, “Fundamental limits of caching with secure delivery,” IEEE Trans. Inf. Forensics Security, vol. 10, no. 2, pp. 355–370, Feb. 2015.
  • [5] A. A. Zewail and A. Yener, “Device-to-device secure coded caching,” IEEE Trans. Inf. Forensics Security, vol. 15, pp. 1513–1524, 2020.
  • [6] ——, “Combination networks with or without secrecy constraints: the impact of caching relays,” IEEE J. Sel. Areas Commun., vol. 36, no. 6, pp. 1140–1152, June 2018.
  • [7] K. Ma and S. Shao, “Secure coded caching with colluding users,” Oct. 2019, arXiv:1910.08268[cs.IT]. [Online]. Available: https://arxiv.org/abs/1910.08268
  • [8] E. Parrinello, A. Ünsal and P. Elia, “Fundamental limits of coded caching with multiple antennas, shared caches and uncoded prefetching,” IEEE Trans. Inf. Theory, vol. 66, no. 4, pp. 2252–2268, April 2020.
  • [9] A. M. Ibrahim, A. A. Zewail and A. Yener, “Coded placement for systems with shared caches,” in Proc. IEEE Int. Conf. Commun., Shanghai, China, May 2019, pp. 1–6.
  • [10] R. Cramer, I. Damgård, and J. Nielsen, Secure multiparty computation and secret sharing, Cambridge University Press, 2015.
  • [11] O. Farràs, T. B. Hansen, T. Kaced and C. Padró, “On the information ratio of non-perfect secret sharing schemes,” Algorithmica, vol. 79, pp. 987–1013, 2016.