
Combinatorial Multi-Access Coded Caching: Improved Rate-Memory Trade-off with
Coded Placement

K. K. Krishnan Namboodiri, and B. Sundar Rajan This work was supported partly by the Science and Engineering Research Board (SERB) of Department of Science and Technology (DST), Government of India, through J.C. Bose National Fellowship to Prof. B. Sundar Rajan, and by the Ministry of Human Resource Development (MHRD), Government of India, through Prime Minister’s Research Fellowship (PMRF) to K. K. Krishnan Namboodiri. Part of the content of this manuscript appeared in Proc. IEEE Inf. Theory Workshop (ITW), Apr. 2023, doi: 10.1109/ITW55543.2023.10161686 [39]. K. K. Krishnan Namboodiri and B. Sundar Rajan are with the Department of Electrical Communication Engineering, IISc Bangalore, India (e-mail: [email protected], [email protected]).
Abstract

This work considers the combinatorial multi-access coded caching problem introduced in the recent work by Muralidhar et al. [P. N. Muralidhar, D. Katyal, and B. S. Rajan, “Maddah-Ali-Niesen scheme for multi-access coded caching,” in IEEE Inf. Theory Workshop (ITW), 2021]. The problem setting consists of a central server having a library of $N$ files and $C$ caches, each with capacity $M$. Each user in the system can access a unique set of $r<C$ caches, and there exists a user corresponding to every distinct set of $r$ caches. Therefore, the number of users in the system is $\binom{C}{r}$. For the aforementioned combinatorial multi-access setting, we propose a coded caching scheme with an MDS code-based coded placement. This novel placement technique helps to achieve a better rate in the delivery phase compared to the optimal scheme under uncoded placement when $M>N/C$. For a lower memory regime, we present another scheme with coded placement, which outperforms the optimal scheme under uncoded placement if the number of files is no more than the number of users. Further, we derive an information-theoretic lower bound on the optimal rate-memory trade-off of the combinatorial multi-access coded caching scheme. In addition, using the derived lower bound, we show that the first scheme is optimal in the higher memory regime, and that the second scheme is optimal if $N\leq\binom{C}{r}$. Finally, we show that the performance of the first scheme is within a constant factor of the optimal performance when $r=2$.

Index Terms:
Coded caching, coded placement, combinatorial multi-access network, lower bound, rate-memory trade-off

I Introduction

With smartphones, mobile applications and expanded connectivity, people are consuming more video content than ever. This increase in mobile video consumption is taking a toll on the internet, which has seen data traffic increase many-fold in a few years. However, the high temporal variability of on-demand video services leaves the network underutilized during off-peak hours. Utilizing the channel resources by employing low-cost cache memory to store contents at the user end during off-peak times is an effective way to ease network traffic congestion in peak hours. In the conventional caching scheme, the users’ demands are met by filling the caches during the placement phase (in off-peak hours, without knowing the user demands) and by transmitting the remaining requested file contents in uncoded form during the delivery phase (in peak hours, after knowing the user demands). In coded caching, introduced in [1], it was shown that by employing coding among the requested contents in the delivery phase, it is possible to bring down the network load further compared to the conventional caching scheme. In [1], Maddah-Ali and Niesen considered a single-server broadcast network, where the server is connected to $K$ users through an error-free shared link. The server has a library of $N$ files, and the user caches have a capacity of $M$ files, where $M\leq N$ (since each user is equipped with a dedicated cache memory, we refer to this network as the dedicated cache network). During the placement phase, the server stores file contents (equivalent to $M$ files) in the caches without knowing the user demands. In the delivery phase, each user requests a single file from the server, and the server makes coded transmissions so that the users can decode their demanded files by making use of the cache contents and the coded transmissions.
The goal of the coded caching problem is to jointly design the placement and the delivery phases such that the rate (the size of the transmission) in the delivery phase is minimized. The scheme in [1] achieved a rate of $K(1-M/N)/(1+KM/N)$, which was later shown to be the optimal rate under uncoded placement if $N\geq K$ [2, 3]. In [4], it was shown that by employing coding among the contents stored in the placement phase, in addition to the coding in the delivery phase, a better rate-memory trade-off can be achieved. Using coded placement, further improvements in the rate were obtained by the schemes in [5, 6, 7, 8, 9]. The works in [10, 11, 12, 13, 14] came up with different converse bounds for the optimal rate-memory trade-off of the coded caching scheme for the dedicated cache network.

Motivated by different practical scenarios, coded caching has been extended to various network models, including multi-access networks [15] and shared cache networks [16]. In the multi-access coded caching (MACC) scheme proposed in [15], the number of users, $K$, is the same as the number of caches. Further, each user is allowed to access $r<K$ neighbouring caches in a cyclic wrap-around fashion. The coded caching scheme under the cyclic wrap-around model was studied in [17, 18, 19, 20, 21, 22]. Those works proposed different achievable schemes, all restricted to uncoded cache placement. In [23], the authors proposed a coded caching scheme with an MDS code-based coded placement technique and achieved a better rate-memory trade-off in the low memory regime compared to the schemes with uncoded placement. Further, by deriving an information-theoretic converse, the scheme in [23] was shown to be optimal in [24]. The MACC problem (in the cyclic wrap-around network) was studied by incorporating security and privacy constraints in [25, 26]. The first work that considered a multi-access network with more users than caches is [27]. The authors established a connection between the MACC problem and design theory, and obtained classes of MACC schemes from resolvable designs. The works in [28, 29] also relied on design theory to obtain MACC schemes.

In this paper, we consider the combinatorial multi-access network model introduced in [30], which consists of $C$ caches and $K=\binom{C}{r}$ users, where $r$ is the number of caches accessed by a user. The combinatorial MACC scheme presented in [30] achieves the rate $R=\binom{C}{t+r}/\binom{C}{t}$ at cache memory $M=Nt/C$, where $t\in\{1,2,\dots,C-r+1\}$. From here onwards, we refer to the scheme in [30] as the MKR scheme. The optimality of the MKR scheme under uncoded placement was shown in [31]. In addition, in [31], the system model in [30] was generalized to a setting where more than one user can access the same set of $r$ caches. In [32], the authors addressed the combinatorial MACC problem with privacy and security constraints.

Similar network topologies have been considered in various cache-aided settings, including two-hop networks [33, 34] and multi-server networks [35]. The two-hop network considered in [33] and [34] consists of a server and $K$ users connected via a set of $h$ cache-aided relay nodes. Each user is connected to a distinct set of $\rho$ relay nodes, where $\rho<h$, and thus the number of users is $K=\binom{h}{\rho}$. Those users also have dedicated caches. The scheme in [33] followed an uncoded random placement strategy, whereas the scheme in [34] relied on MDS-coded placement. The main difference between the combinatorial MACC scheme and the coded caching scheme for the two-hop network is in the delivery phase. In the combinatorial MACC setting, the server communicates directly with the users over an error-free broadcast link. However, in two-hop networks, the server responds to the users’ requests by transmitting signals to each of the relay nodes; each relay node then utilizes its received signal and its cached contents to transmit unicast signals to each of its connected end users. In [35], a network model with $P$ servers and $K$ cache-aided users is considered. Each user can get connected to a random set of $\rho$ servers, where $\rho\leq P$. In order to ensure that any $\rho$ servers collectively store the entire file library, the file contents are stored in the servers after MDS encoding.

Figure 1: $(C,r,M,N)$ combinatorial multi-access network.

I-A Contributions

In this paper, we study the combinatorial multi-access setting introduced in [30] and make the following technical contributions:

  • In [30], the authors proposed a combinatorial MACC scheme (the MKR scheme) that achieves the optimal rate under uncoded placement. However, in the MKR scheme, a user gets multiple copies of the same subfile from the different caches that it accesses. In order to remove that redundancy, we introduce a novel MDS code-based coded placement technique. By employing coded placement, we ensure that, using the accessible cache contents, a user can decode all the subfiles that the corresponding user in the MKR scheme (the user accessing the same set of caches) accesses in uncoded form. The coded subfiles transmitted in the delivery phase of our proposed scheme (Scheme 1) remain the same as the transmissions in the MKR scheme. Thereby, our proposed scheme in Theorem 1 achieves the same rate as the MKR scheme at a smaller cache memory value. For $M>N/C$, Scheme 1 outperforms the MKR scheme in terms of the achieved rate.

  • For $0\leq M\leq N/C$, Scheme 1 has the same rate as the MKR scheme. Thus, we present another achievability result for the combinatorial MACC scheme in Theorem 2. The new coding scheme (Scheme 2) achieves the rate $N-CM$ for $0\leq M\leq\frac{N-\binom{C-1}{r}}{C}$, which is strictly less than the rate achieved by the optimal scheme with uncoded placement when the number of files at the server is no more than the number of users in the system. In effect, by employing coding in the placement phase, we bring down the rate (except at $M=N/C$) further compared to the optimal scheme under uncoded placement.

  • We derive an information-theoretic lower bound on the optimal rate-memory trade-off of the combinatorial MACC scheme (Theorem 3). To obtain this lower bound, we consider a scenario of decoding the entire server library at the user-end using multiple transmissions. In order to bound the uncertainty in those transmissions, we make use of an information-theoretic inequality (Han’s inequality) given in [37]. Han’s inequality was originally used in the context of coded caching in [10, 11]. Our bounding technique is similar to the approach used in [10] and [24], where lower bounds were derived for the dedicated cache scheme and the MACC scheme with cyclic wrap-around, respectively. While deriving the lower bound, we do not impose any restriction on the placement phase. Thus, the lower bound is applicable even if the scheme employs coding in the placement phase.

  • By using the derived lower bound, we show that Scheme 1 is optimal for higher values of $M$, specifically when $\frac{M}{N}\geq\frac{\binom{C}{r}-1}{\binom{C}{r}}$ (Theorem 4). Further, we show that Scheme 2 is optimal when $N\leq\binom{C}{r}$ (Theorem 5). In addition, we show that when $r=2$ and $N\geq K$, the rate-memory trade-off given by Scheme 1 is within a constant multiplicative factor, 21, of the information-theoretic optimal rate-memory trade-off (Theorem 6).

I-B Notations

For a positive integer $n$, $[n]$ denotes the set $\{1,2,\ldots,n\}$. For two positive integers $a,b$ such that $a\leq b$, $[a:b]=\{a,a+1,\ldots,b\}$. For two non-negative integers $n,m$, we have $\binom{n}{m}=\frac{n!}{m!(n-m)!}$ if $n\geq m$, and $\binom{n}{m}=0$ if $n<m$. Uppercase letters (excluding the letters $C,K,M,N$ and $R$) are used to denote random variables (the letters $C,K,M,N$ and $R$ are reserved for denoting system parameters). The set of random variables $\{V_{a},V_{a+1},\dots,V_{b}\}$ is denoted as $V_{[a:b]}$. Calligraphic letters are used to denote sets. Further, for a set of positive integers $\mathcal{I}$, $V_{\mathcal{I}}$ represents the set of random variables indexed by the elements in $\mathcal{I}$ (for instance, $V_{\{2,4,5\}}=\{V_{2},V_{4},V_{5}\}$). Bold lowercase letters represent vectors, and bold uppercase letters represent matrices. The identity matrix of size $n\times n$ is denoted as $\mathbf{I}_{n}$. Further, $\mathbb{F}_{q}$ represents the finite field of size $q$. Finally, for a real number $z$, $(z)^{+}=\max(0,z)$.

The rest of the paper is organized as follows. Section II describes the system model, and Section III describes useful preliminaries. The main results on the achievability and converse are presented in Section IV. The proofs of the main results are given in Section V.

II System Model and Problem Formulation

The system model, as shown in Fig. 1, consists of a central server with a library of $N$ independent files, $W_{[1:N]}\triangleq\{W_{1},W_{2},\dots,W_{N}\}$, each of size 1 unit (we assume that 1 unit of a file is constituted by $f$ symbols from the finite field of size $q$; we further assume that $q$ is such that all MDS codes considered in this work exist over $\mathbb{F}_{q}$). There are $C$ caches, each with capacity $M$ units ($0\leq M\leq N$), i.e., each cache can store contents equivalent to $M$ files. There are $K$ users in the system, and each of them can access $r$ out of the $C$ caches, where $r<C$. We assume that there exists a user corresponding to every choice of $r$ caches from the total $C$ caches. Thus, the number of users in the system is $K=\binom{C}{r}$. Further, each user is denoted by a set $\mathcal{U}$, where $\mathcal{U}\subset[C]$ with $|\mathcal{U}|=r$. That is, a user is indexed with the set of caches that it accesses; user $\mathcal{U}$ has access to all the caches in the set $\mathcal{U}$. A system under the aforementioned setting is called the $(C,r,M,N)$ combinatorial multi-access network. The coded caching scheme under this model was introduced in [30].

The $(C,r,M,N)$ combinatorial MACC scheme works in two phases: the placement phase and the delivery phase. In the placement phase, the server populates the caches with the file contents. The cache placement is done without knowing the demands of the users. The placement can be either coded or uncoded. By uncoded placement, we mean that files are split into subfiles and kept in the caches as such, while coded placement means that coded combinations of the subfiles are allowed to be kept in the caches. The number of subfiles into which a file is split is termed the subpacketization number. The contents stored in cache $c$, $c\in[C]$, are denoted as $Z_{c}$. In the delivery phase, user $\mathcal{U}$ requests file $W_{d_{\mathcal{U}}}$ from the server, where $d_{\mathcal{U}}\in[N]$, and the demand vector is $\mathbf{d}=(d_{\mathcal{U}}:\mathcal{U}\subset[C],|\mathcal{U}|=r)$. In the demand vector, the users (subsets of size $r$) are arranged in lexicographic order. Corresponding to the demand vector, the server makes a transmission $X$ of size $R$ units. We assume that the broadcast channel from the server to the users is error-free. The non-negative real number $R$ is said to be the rate of transmission. Finally, user $\mathcal{U}$ should be able to decode $W_{d_{\mathcal{U}}}$ using the transmission $X$ and the accessible cache contents $Z_{c}, c\in\mathcal{U}$. That is, for every $\mathcal{U}\subset[C]$ with $|\mathcal{U}|=r$, we have

$H(W_{d_{\mathcal{U}}}\,|\,Z_{\mathcal{U}},X)=0$ (1)

where $Z_{\mathcal{U}}=\{Z_{c}:c\in\mathcal{U}\}$.

Definition 1.

For the $(C,r,M,N)$ combinatorial MACC scheme, a rate $R$ is said to be achievable if the scheme satisfies (1) with a rate less than or equal to $R$ for every possible demand vector. Further, the optimal rate-memory trade-off is

$R^{*}(M)=\inf\{R:R \text{ is achievable}\}.$ (2)

The coded caching problem aims to design the placement and the delivery phases jointly so that the rate is minimized. In this work, we present two achievability results and a lower bound on $R^{*}(M)$.

III Preliminaries

In this section, we discuss the preliminaries required for describing the coding scheme as well as the converse bound.

III-A Review of the combinatorial MACC scheme in [30] (MKR scheme)

In the sequel, we describe the combinatorial MACC scheme presented in [30].

In the placement phase, the server divides each file into $\binom{C}{t}$ non-overlapping subfiles of equal size, where $t\triangleq CM/N\in[C]$. The subfiles are indexed with $t$-sized subsets of $[C]$. Therefore, we have $W_{n}=\{W_{n,\mathcal{T}}:\mathcal{T}\subseteq[C],|\mathcal{T}|=t\}$ for all $n\in[N]$. The server fills cache $c$ as follows:

$Z_{c}=\left\{W_{n,\mathcal{T}}:\mathcal{T}\ni c,\ \mathcal{T}\subseteq[C],\ |\mathcal{T}|=t,\ n\in[N]\right\}$ (3)

for every $c\in[C]$. According to the above placement, each cache stores $\binom{C-1}{t-1}$ subfiles of every file. Thus, we have $M/N=\binom{C-1}{t-1}/\binom{C}{t}=t/C$. Now, assume that user $\mathcal{U}$ demands file $W_{d_{\mathcal{U}}}$ from the server. In the delivery phase, the server makes a coded transmission corresponding to every $\mathcal{S}\subseteq[C]$ such that $|\mathcal{S}|=t+r$. The server transmission is as follows:

$\bigoplus_{\substack{\mathcal{U}\subseteq\mathcal{S}\\ |\mathcal{U}|=r}}W_{d_{\mathcal{U}},\mathcal{S}\backslash\mathcal{U}}.$ (4)

Therefore, the number of coded subfiles transmitted is $\binom{C}{t+r}$, where each coded subfile is $1/\binom{C}{t}$ of a file in size. Thus, the rate of transmission is $\binom{C}{t+r}/\binom{C}{t}$.

Notice that user $\mathcal{U}$ gets subfile $W_{n,\mathcal{T}}$ for all $n\in[N]$ from the cache placement if $\mathcal{U}\cap\mathcal{T}\neq\phi$. Now, let us see how the user gets subfile $W_{d_{\mathcal{U}},\mathcal{T}}$ when $\mathcal{U}\cap\mathcal{T}=\phi$. Consider the transmission corresponding to the $(t+r)$-sized set $\mathcal{S}=\mathcal{U}\cup\mathcal{T}$. In the coded message

$\bigoplus_{\substack{\mathcal{U}\subseteq\mathcal{S}\\ |\mathcal{U}|=r}}W_{d_{\mathcal{U}},\mathcal{S}\backslash\mathcal{U}}=W_{d_{\mathcal{U}},\mathcal{T}}\oplus\bigoplus_{\substack{\mathcal{U}'\subseteq\mathcal{S}\\ |\mathcal{U}'|=r\\ \mathcal{U}'\neq\mathcal{U}}}W_{d_{\mathcal{U}'},\mathcal{S}\backslash\mathcal{U}'}$

user $\mathcal{U}$ has access to $W_{d_{\mathcal{U}'},\mathcal{S}\backslash\mathcal{U}'}$ for every $\mathcal{U}'\neq\mathcal{U}$, since $|\mathcal{U}\cap(\mathcal{S}\backslash\mathcal{U}')|\neq 0$. Therefore, user $\mathcal{U}$ can decode the demanded file $W_{d_{\mathcal{U}}}$.
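As a sanity check, the placement (3), delivery (4), and the peeling argument above can be simulated directly. The following sketch is illustrative (not part of the original paper): subfiles are modelled as random bytes and the $\oplus$ operation as bitwise XOR, and we verify that every user decodes its demanded file while the rate equals $\binom{C}{t+r}/\binom{C}{t}$.

```python
from itertools import combinations
import random

def mkr_simulation(C, r, t, N=6, seed=0):
    """Simulate the MKR placement/delivery of Section III-A and check decodability."""
    rng = random.Random(seed)
    caches = range(1, C + 1)

    def subsets(k):
        return [frozenset(s) for s in combinations(caches, k)]

    # Each file is split into binom(C,t) subfiles, indexed by t-subsets of [C].
    files = {(n, T): rng.randrange(256) for n in range(N) for T in subsets(t)}
    users = subsets(r)
    demand = {U: rng.randrange(N) for U in users}

    # Delivery: one XOR per (t+r)-subset S, combining W_{d_U, S\U} over r-subsets U of S.
    transmissions = {}
    for S in subsets(t + r):
        x = 0
        for U in combinations(sorted(S), r):
            U = frozenset(U)
            x ^= files[(demand[U], S - U)]
        transmissions[S] = x

    # Decoding: user U has W_{n,T} whenever T intersects U; for disjoint T it peels
    # the XOR for S = T u U, since every other term W_{d_V, S\V} is accessible (U\V != empty).
    for U in users:
        for T in subsets(t):
            if T & U:
                continue  # cached, accessible directly
            S = T | U
            x = transmissions[S]
            for V in combinations(sorted(S), r):
                V = frozenset(V)
                if V != U:
                    x ^= files[(demand[V], S - V)]
            assert x == files[(demand[U], T)]

    return len(subsets(t + r)) / len(subsets(t))  # rate = binom(C,t+r)/binom(C,t)

print(mkr_simulation(C=5, r=2, t=2))  # rate binom(5,4)/binom(5,2) = 0.5
```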

III-B Maximum distance separable (MDS) codes [36]

An $[n,k]$ maximum distance separable (MDS) code is an erasure code that allows recovering the $k$ message/information symbols from any $k$ out of the $n$ coded symbols. Consider a systematic $[n,k]$ MDS code (over the finite field $\mathbb{F}_{q}$) with generator matrix $\mathbf{G}_{k\times n}=[\mathbf{I}_{k}\,|\,\mathbf{P}_{k\times (n-k)}]$. Then, we have

$[m_{1},m_{2},\dots,m_{k},c_{1},c_{2},\dots,c_{n-k}]=[m_{1},m_{2},\dots,m_{k}]\mathbf{G}$

where the message vector $[m_{1},m_{2},\dots,m_{k}]\in\mathbb{F}_{q}^{k}$.

III-C Han’s Inequality [37]

To derive a lower bound on $R^{*}(M)$, we use the following lemma, which gives an inequality on the entropies of subsets of random variables.

Lemma 1 (Han’s Inequality [37]).

Let $V_{[1:m]}=\{V_{1},V_{2},\dots,V_{m}\}$ be a set of $m$ random variables. Further, let $\mathcal{A}$ and $\mathcal{B}$ denote subsets of $[1:m]$ such that $|\mathcal{A}|=a$ and $|\mathcal{B}|=b$ with $a\geq b$. Then, we have

$\frac{1}{\binom{m}{a}}\sum_{\substack{\mathcal{A}\subseteq[1:m]\\ |\mathcal{A}|=a}}\frac{H(V_{\mathcal{A}})}{a}\leq\frac{1}{\binom{m}{b}}\sum_{\substack{\mathcal{B}\subseteq[1:m]\\ |\mathcal{B}|=b}}\frac{H(V_{\mathcal{B}})}{b}$ (5)

where $V_{\mathcal{A}}$ and $V_{\mathcal{B}}$ denote the sets of random variables indexed by the elements in $\mathcal{A}$ and $\mathcal{B}$, respectively.
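Inequality (5) can also be checked numerically. The sketch below is illustrative (our own construction, not from the paper): it computes subset entropies in bits from an arbitrary joint pmf over three binary random variables and verifies the $a=2$ versus $b=1$ case of the lemma.

```python
from itertools import combinations
from math import comb, log2
import random

def subset_entropy(pmf, idx):
    """H(V_idx): marginalize the joint pmf onto the coordinates in idx, then Shannon entropy."""
    marg = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

def han_sides(pmf, m, a, b):
    """Left- and right-hand sides of inequality (5) for subset sizes a >= b."""
    lhs = sum(subset_entropy(pmf, A) for A in combinations(range(m), a)) / (comb(m, a) * a)
    rhs = sum(subset_entropy(pmf, B) for B in combinations(range(m), b)) / (comb(m, b) * b)
    return lhs, rhs

# A random joint distribution over three binary random variables (V1, V2, V3).
rng = random.Random(1)
w = [rng.random() for _ in range(8)]
pmf = {(i, j, k): w[4 * i + 2 * j + k] / sum(w)
       for i in (0, 1) for j in (0, 1) for k in (0, 1)}

lhs, rhs = han_sides(pmf, m=3, a=2, b=1)
assert lhs <= rhs + 1e-12  # Han's inequality holds
```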

III-D Motivating Example

Even though the MKR scheme is optimal under uncoded placement, we show with an example that a further reduction in rate is possible with the help of coded placement.

Figure 2: $(4,2,M,N)$ combinatorial multi-access network.

Consider the $(C=4,r=2,M=3,N=6)$ combinatorial multi-access network in Fig. 2. There are $\binom{4}{2}=6$ users in the system, each denoted by a set $\mathcal{U}$, where $\mathcal{U}\subset[C]$ with $|\mathcal{U}|=2$. Let us first see the MKR scheme, where the placement is uncoded. In the placement phase, the server divides each file into 6 non-overlapping subfiles of equal size. Each subfile is denoted by a set $\mathcal{T}$, where $\mathcal{T}\subset[C]$ with $|\mathcal{T}|=2$. Thus, we have $W_{n}=\{W_{n,12},W_{n,13},W_{n,14},W_{n,23},W_{n,24},W_{n,34}\}$ for all $n\in[6]$. Note that, even though the subfile indices are sets, we omit the curly braces for simplicity. The server populates the $k^{\text{th}}$ cache as follows:

$Z_{k}=\{W_{n,\mathcal{T}}:k\in\mathcal{T},\ \mathcal{T}\subset[C],\ |\mathcal{T}|=2\}.$

The contents stored in the caches are given in Table I. From the cache placement, user $\mathcal{U}$ has access to all the subfiles (of every file) except those indexed with 2-sized subsets of $[C]\backslash\mathcal{U}$. For example, user $\{1,2\}$ has access to all the subfiles except subfile $W_{n,34}$.

TABLE I: $(C=4,r=2,M=3,N=6)$: MKR scheme placement (subfiles are stored $\forall n\in[6]$)

Cache 1: $W_{n,12},\ W_{n,13},\ W_{n,14}$
Cache 2: $W_{n,12},\ W_{n,23},\ W_{n,24}$
Cache 3: $W_{n,13},\ W_{n,23},\ W_{n,34}$
Cache 4: $W_{n,14},\ W_{n,24},\ W_{n,34}$

TABLE II: $(C=4,r=2,M=2.5,N=6)$: Cache contents (mini-subfiles are stored $\forall n\in[6]$)

Cache 1: $W_{n,12}^{(1)},\ W_{n,13}^{(1)},\ W_{n,14}^{(1)},\ W_{n,12}^{(2)}+W_{n,13}^{(2)},\ W_{n,13}^{(2)}+W_{n,14}^{(2)}$
Cache 2: $W_{n,12}^{(2)},\ W_{n,23}^{(1)},\ W_{n,24}^{(1)},\ W_{n,12}^{(1)}+W_{n,23}^{(2)},\ W_{n,23}^{(2)}+W_{n,24}^{(2)}$
Cache 3: $W_{n,13}^{(2)},\ W_{n,23}^{(2)},\ W_{n,34}^{(1)},\ W_{n,13}^{(1)}+W_{n,23}^{(1)},\ W_{n,23}^{(1)}+W_{n,34}^{(2)}$
Cache 4: $W_{n,14}^{(2)},\ W_{n,24}^{(2)},\ W_{n,34}^{(2)},\ W_{n,14}^{(1)}+W_{n,24}^{(1)},\ W_{n,24}^{(1)}+W_{n,34}^{(1)}$

In the delivery phase, the users reveal their demands. Let $W_{d_{\mathcal{U}}}$ be the file demanded by user $\mathcal{U}$. Then the server transmits $W_{d_{12},34}+W_{d_{13},24}+W_{d_{14},23}+W_{d_{23},14}+W_{d_{24},13}+W_{d_{34},12}$. The claim is that all the users get their demanded files from the above placement and delivery phases. For example, user $\{1,2\}$ has access to all the subfiles except subfile $W_{n,34}$; however, it can obtain $W_{d_{12},34}$ by subtracting the remaining subfiles from the transmitted message. Similarly, it can be verified that all the users get their demanded files. Thus, the rate achieved is $1/6$, which is, in fact, optimal under uncoded placement [31]. Notice that every subfile is cached in two caches. For instance, subfile $W_{n,12}$ is stored in cache 1 and cache 2. Thus, user $\{1,2\}$, who accesses those two caches, gets multiple copies of the same subfile. This leads to the question of whether we can do better than the MKR scheme.

Now, we employ a coded placement technique and achieve the same rate $1/6$ at $M=2.5$. The coding scheme is described hereinafter. The server divides each file into 6 subfiles of equal size, and each subfile is indexed with a set $\mathcal{T}$, where $\mathcal{T}\subset[C]$ with $|\mathcal{T}|=2$. Thus, we have $W_{n}=\{W_{n,12},W_{n,13},W_{n,14},W_{n,23},W_{n,24},W_{n,34}\}$ for all $n\in[6]$. Further, each subfile is divided into 2 mini-subfiles of half the subfile size each: $W_{n,\mathcal{T}}=\{W_{n,\mathcal{T}}^{(1)},W_{n,\mathcal{T}}^{(2)}\}$ for all subfiles. The cache placement is summarized in Table II. For each of the 6 files, a cache contains 5 mini-subfiles (3 uncoded and 2 coded mini-subfiles), each of size $1/12$ of a file. Thus, the size of each cache is $6\times 5/12=2.5$ units.

Let $W_{d_{\mathcal{U}}}$ be the file demanded by user $\mathcal{U}$ in the delivery phase. Then the server transmits $W_{d_{12},34}+W_{d_{13},24}+W_{d_{14},23}+W_{d_{23},14}+W_{d_{24},13}+W_{d_{34},12}$. Notice that this coded message is the same as the transmitted message in the MKR scheme; thus, the rate achieved is also the same, namely $1/6$. Now, it remains to show that the users can decode their demanded files. Since our scheme follows the same delivery policy as the MKR scheme, it is enough to show that a user in our scheme can decode the subfiles that are accessible to the corresponding user (the user accessing the same set of caches) in the MKR scheme. Consider user $\{1,2\}$, accessing cache 1 and cache 2. The user gets $W_{n,12}^{(1)}$ from cache 1 and $W_{n,12}^{(2)}$ from cache 2, and thus obtains the subfile $W_{n,12}=\{W_{n,12}^{(1)},W_{n,12}^{(2)}\}$ for all $n\in[6]$. Using $W_{n,12}^{(2)}$, the user can decode $W_{n,13}^{(2)}$ and $W_{n,14}^{(2)}$ from cache 1. Similarly, from cache 2, the mini-subfiles $W_{n,23}^{(2)}$ and $W_{n,24}^{(2)}$ can be decoded using $W_{n,12}^{(1)}$. That means user $\{1,2\}$ obtains the subfiles $W_{n,12},W_{n,13},W_{n,14},W_{n,23}$ and $W_{n,24}$ for every $n\in[6]$. From Table II, it can be verified that user $\mathcal{U}$ gets every subfile $W_{n,\mathcal{T}}$ such that $\mathcal{T}\cap\mathcal{U}\neq\phi$. Thus, a user can decode the subfiles that are accessible to the corresponding user in the MKR scheme in the placement phase. In essence, by employing a coded placement technique, we achieve the same rate at a smaller cache memory value.
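The sequential peeling described above for user $\{1,2\}$ can be traced in code. The following sketch is illustrative: it models only the Cache 1 and Cache 2 columns of Table II, treats mini-subfiles as random bytes, and instantiates the '+' of Table II as bitwise XOR (i.e., the GF(2) case of the field assumed in the paper).

```python
import random

rng = random.Random(0)
N = 6
PAIRS = ["12", "13", "14", "23", "24", "34"]
# Mini-subfiles W_{n,T}^{(i)} modelled as random bytes.
W = {(n, T, i): rng.randrange(256) for n in range(N) for T in PAIRS for i in (1, 2)}

def cache1(n):
    """Column 'Cache 1' of Table II for file n: three uncoded and two coded mini-subfiles."""
    return {"12_1": W[n, "12", 1], "13_1": W[n, "13", 1], "14_1": W[n, "14", 1],
            "c1": W[n, "12", 2] ^ W[n, "13", 2], "c2": W[n, "13", 2] ^ W[n, "14", 2]}

def cache2(n):
    """Column 'Cache 2' of Table II for file n."""
    return {"12_2": W[n, "12", 2], "23_1": W[n, "23", 1], "24_1": W[n, "24", 1],
            "c1": W[n, "12", 1] ^ W[n, "23", 2], "c2": W[n, "23", 2] ^ W[n, "24", 2]}

# User {1,2}: obtain both halves of W_{n,12}, then peel the coded rows sequentially.
for n in range(N):
    z1, z2 = cache1(n), cache2(n)
    w12_1, w12_2 = z1["12_1"], z2["12_2"]   # full subfile W_{n,12}
    w13_2 = z1["c1"] ^ w12_2                # from cache 1, using W_{n,12}^{(2)}
    w14_2 = z1["c2"] ^ w13_2                # from cache 1, using W_{n,13}^{(2)}
    w23_2 = z2["c1"] ^ w12_1                # from cache 2, using W_{n,12}^{(1)}
    w24_2 = z2["c2"] ^ w23_2                # from cache 2, using W_{n,23}^{(2)}
    # Together with the uncoded halves in the caches, user {1,2} now has
    # W_{n,12}, W_{n,13}, W_{n,14}, W_{n,23}, W_{n,24} in full.
    assert (w13_2, w14_2, w23_2, w24_2) == \
        (W[n, "13", 2], W[n, "14", 2], W[n, "23", 2], W[n, "24", 2])
```

Each cache holds 5 mini-subfiles per file, each of size 1/12 of a file, so the cache size is 6 x 5/12 = 2.5 units, matching Table II.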

Next, we show that for the $(C=4,r=2,M=3,N=6)$ coded caching scheme, it is possible to meet all the user demands without any server transmission. In other words, the rate $R=0$ is achievable. In the placement phase, each file is divided into 2 subfiles of equal size, $W_{n}=\{W_{n,1},W_{n,2}\}$ for all $n\in[6]$. Then, $(W_{n,1},W_{n,2})$ is encoded with a $[4,2]$ MDS code generator matrix $\mathbf{G}_{2\times 4}$. Thus, we have the coded subfiles

$(C_{n,1},C_{n,2},C_{n,3},C_{n,4})=(W_{n,1},W_{n,2})\mathbf{G}_{2\times 4}.$

Then the server fills the caches as follows: $Z_{1}=\{C_{n,1}:n\in[6]\}$, $Z_{2}=\{C_{n,2}:n\in[6]\}$, $Z_{3}=\{C_{n,3}:n\in[6]\}$, and $Z_{4}=\{C_{n,4}:n\in[6]\}$. From this placement, each user gets two coded subfiles of every file from the caches (one from each accessed cache). Since any two columns of the matrix $\mathbf{G}$ are linearly independent, every user can decode all the files without any further transmission. Note that each cache stores 6 coded subfiles of size $1/2$ each, so $M=3$.
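A concrete instance of this zero-rate placement is easy to verify. The sketch below is illustrative: it fixes GF(5) as a field over which a $[4,2]$ MDS code exists and picks a particular systematic generator matrix; both are our choices for the example, not values specified in the paper. It checks that any two coded subfiles (i.e., any two accessed caches) determine the original subfile pair.

```python
from itertools import combinations

P = 5  # prime field size sufficient for a [4,2] MDS code (illustrative choice)
# Systematic generator G = [I_2 | P']; every pair of columns is linearly
# independent mod 5, so (C_{n,1},...,C_{n,4}) = (W_{n,1}, W_{n,2}) G is MDS.
G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]

def encode(m):
    """Coded subfiles (C_{n,1},...,C_{n,4}) from the pair (W_{n,1}, W_{n,2})."""
    return [(m[0] * G[0][j] + m[1] * G[1][j]) % P for j in range(4)]

def decode(two_symbols):
    """Recover the pair from ANY two (cache index, coded subfile) pairs: 2x2 solve mod P."""
    (i, yi), (j, yj) = two_symbols
    a, c = G[0][i], G[1][i]
    b, d = G[0][j], G[1][j]
    det_inv = pow((a * d - b * c) % P, P - 2, P)  # Fermat inverse of the determinant
    m0 = ((d * yi - c * yj) * det_inv) % P
    m1 = ((a * yj - b * yi) * det_inv) % P
    return [m0, m1]

# Every user reads two distinct caches, i.e., two distinct coded symbols: always decodable.
for m0 in range(P):
    for m1 in range(P):
        cw = encode([m0, m1])
        for i, j in combinations(range(4), 2):
            assert decode([(i, cw[i]), (j, cw[j])]) == [m0, m1]
```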

IV Main Results

In this section, we present two achievability results for the $(C,r,M,N)$ combinatorial MACC scheme. Further, we derive an information-theoretic lower bound on the optimal rate-memory trade-off $R^{*}(M)$.

Theorem 1.

Let $t\in[C-r+1]$. Then, for the $(C,r,M,N)$ combinatorial MACC scheme, the rate $\binom{C}{t+r}/\binom{C}{t}$ is achievable at cache memory

$M=N\left(\frac{t}{C}-\frac{1}{\binom{C}{t}}\sum\limits_{i=1}^{\tilde{r}-1}\frac{\tilde{r}-i}{r}\binom{r}{\tilde{r}-i+1}\binom{C-r}{t-\tilde{r}+i-1}\right)$ (6)

where $\tilde{r}=\min(r,t)$.

The proof of Theorem 1 is given in Section V-A. $\blacksquare$

In Section V-A, for the $(C,r,M,N)$ combinatorial MACC problem, we explicitly present a coding scheme making use of coded placement. As we have seen in the example in Section III-D, coding in the placement phase helps to avoid redundancy in the cached contents (accessed by a user) in the MKR scheme. The example shows that by storing fewer coded subfiles in the placement phase, a user can decode the same subfiles that the corresponding user in the MKR scheme (the user accessing the same set of caches) gets in uncoded form. In other words, by employing coding in the placement, it is possible to achieve the same rate as the MKR scheme with a smaller cache memory. For a cache memory value $M$ as defined in (6), corresponding to a $t\in[C-r]$, the rate expression can be rewritten as (the proof is given in Appendix B)

$R(M)=\frac{\binom{C}{t+r}}{\binom{C}{t}}=\frac{\binom{C}{r}\left(1-\frac{rM}{N}\right)}{\binom{t+r}{r}}.$ (7)

Also, the parameter $t=C-r+1$ corresponds to $M=N/r$ (the proof is provided in Appendix C) and rate $R(M=N/r)=0$. Therefore, the rate expression (7) is valid for every $t\in[C-r+1]$. For a general $0\leq M\leq N/r$, the lower convex envelope of these points is achievable via the memory-sharing technique. In the rate expression, the term $(1-rM/N)$ shows that a user can access or decode an $rM/N$ fraction of every file from the caches that it accesses and requires only the remaining portion of the demanded file from the delivery phase. Further, the denominator $\binom{t+r}{r}$ is the number of users simultaneously served by a coded transmission in the delivery phase.

The parameter $t$ in the MKR scheme represents the number of times the entire server library is duplicated among the caches. In the MKR scheme, user $\mathcal{U}$ gets $|\mathcal{T}\cap\mathcal{U}|$ copies of a subfile $W_{n,\mathcal{T}}$. Therefore, a user gets a different number of copies of different subfiles, depending upon the cardinality of the intersection between the user index set and the subfile index set. Our objective is to design a coded placement phase such that, from the accessible cache contents, a user is able to decode the subfiles that the corresponding user in the MKR scheme gets in uncoded form. The main challenge in designing such a placement phase is that a user should be able to decode all the subfiles with a non-zero intersection between the user index set and the subfile index set. To tackle this challenge, we employ a two-level MDS encoding. First, we encode the subfiles with an MDS code generator matrix. Then, in each round of the placement phase, a few selected coded subfiles are encoded again with another MDS code generator matrix. This two-level encoding technique is different from the MDS code-based coded placement strategies followed by the schemes in [34, 35] for two-hop networks and multi-server networks, respectively; in both cases, the entire file library was encoded with a single MDS code. In our case, user $\mathcal{U}$, $\mathcal{U}\subseteq[C], |\mathcal{U}|=r$, should be able to decode all the subfiles $W_{n,\mathcal{T}}$ with $\mathcal{T}\cap\mathcal{U}\neq\phi$ from the accessible cache contents. In general, a single-level encoding is not sufficient to ensure that. Our first-level MDS encoding duplicates the file library, whereas our second-level MDS encoding ensures that the users can decode the required subfiles from the cache contents.

Further, note that the parameter $\tilde{r}=\min(r,t)$ is the maximum value of $|\mathcal{T}\cap\mathcal{U}|$ for any $\mathcal{T}$ and $\mathcal{U}$. In other words, $\tilde{r}$ is the maximum number of copies of a single subfile accessed by a user in the MKR scheme. Therefore, our placement phase depends upon $\tilde{r}$; in fact, it consists of $\tilde{r}$ rounds, i.e., Round $0$, Round $1$, $\dots$, Round $\tilde{r}-1$. The transmissions in our delivery phase are the same as those of the MKR scheme. Thus, the coded placement phase should enable user $\mathcal{U}$ to decode all the subfiles $W_{n,\mathcal{T}}$ with $\mathcal{T}\cap\mathcal{U}\neq\phi$ from the content stored in the accessible caches. Let us denote the content stored in cache $c$ in Round $b$ as $Z_{c}^{b}$, where $c\in[C]$ and $b\in\{0\}\cup[\tilde{r}-1]$. Then, we design our placement phase such that user $\mathcal{U}$ can decode a subfile $W_{n,\mathcal{T}}$ with $|\mathcal{T}\cap\mathcal{U}|=\tilde{r}-\beta$ using $Z_{c}^{0},Z_{c}^{1},\dots,Z_{c}^{\beta}$ for every $c\in\mathcal{U}$. For instance, using $Z_{c}^{0}$ alone (for every $c\in\mathcal{U}$), the user can decode $W_{n,\mathcal{T}}$ if $|\mathcal{T}\cap\mathcal{U}|=\tilde{r}$. In further decoding, the user also uses the already decoded subfiles; thus, the cache content decoding happens sequentially. First, the user decodes the subfiles $W_{n,\mathcal{T}}$ with $|\mathcal{T}\cap\mathcal{U}|=\tilde{r}$, then the subfiles with $|\mathcal{T}\cap\mathcal{U}|=\tilde{r}-1$, and so on. Finally, the user decodes the subfiles $W_{n,\mathcal{T}}$ with $|\mathcal{T}\cap\mathcal{U}|=1$.

When $t=1$, there is no duplication in the cached contents, and thus the contents a user accesses from different caches are distinct. Therefore, a performance improvement from coding is available only when $t>1$. Further, the placement phase of the MKR scheme is independent of $r$, whereas our scheme takes $r$ into consideration during cache placement. Also, our scheme needs to divide each file into $\tilde{r}!\binom{C}{t}$ subfiles, whereas the MKR scheme needs a smaller subpacketization of $\binom{C}{t}$.

The MKR scheme achieves the rate $\binom{C}{t+r}/\binom{C}{t}$ at $M=Nt/C$, which is optimal under uncoded placement. Since $\sum_{i=1}^{\tilde{r}-1}\frac{\tilde{r}-i}{r}\binom{r}{\tilde{r}-i+1}\binom{C-r}{t-\tilde{r}+i-1}$ is always positive for $t>1$, we achieve the same rate $\binom{C}{t+r}/\binom{C}{t}$ at an $M<Nt/C$. This advantage is enabled with the help of coding in the placement phase. If $t=1$, we have cache memory $M=N/C$. For $0\leq M\leq N/C$, the scheme in Theorem 1 has the same rate-memory trade-off as the MKR scheme. For every $M>N/C$, our proposed scheme performs strictly better than the MKR scheme in terms of the rate achieved.
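The corner points of the two trade-offs can be checked numerically. The following sketch is our own illustration (the helper names `scheme1_point` and `mkr_point` are hypothetical, not from the paper); it evaluates the memory value in (14) and the common rate $\binom{C}{t+r}/\binom{C}{t}$:

```python
from math import comb

def scheme1_point(C, r, t):
    """Memory-rate corner point of Scheme 1 for a given t, per eq. (14)."""
    rt = min(r, t)  # r-tilde
    m = t / C - sum((rt - i) / r * comb(r, rt - i + 1) * comb(C - r, t - rt + i - 1)
                    for i in range(1, rt)) / comb(C, t)
    return m, comb(C, t + r) / comb(C, t)  # (M/N, rate)

def mkr_point(C, r, t):
    """MKR corner point: M/N = t/C at the same rate."""
    return t / C, comb(C, t + r) / comb(C, t)
```

For $(C,r,t)=(8,3,2)$, both functions return the same rate, but Scheme 1 attains it at $M/N=3/14$ rather than $1/4$, consistent with Remark 1(a).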

Remark 1.

To quantify the improvement brought in by Theorem 1 compared to the MKR scheme, we consider two memory points: a) the point corresponding to $t=2$ (a lower cache memory value), and b) the point corresponding to $t=C-r$ (a higher cache memory value).
a) The parameter $t=2$ corresponds to $M=\frac{N}{C}\left(2-\frac{r-1}{C-1}\right)$. The corresponding rate from Theorem 1 is $R(M)=\binom{C}{r+2}/\binom{C}{2}$. The MKR scheme, however, achieves the rate $R_{\text{uncoded}}^{*}(M)=\left(1+\frac{r(C+1)(r-1)}{2(C-r-1)(C-1)}\right)\binom{C}{r+2}/\binom{C}{2}$. Therefore, at $M=\frac{N}{C}\left(2-\frac{r-1}{C-1}\right)$, we achieve a rate reduction by a multiplicative factor

\[\frac{R_{\text{uncoded}}^{*}(M)}{R(M)}=1+\frac{r(C+1)(r-1)}{2(C-r-1)(C-1)}.\]

For a fixed $C$, this multiplicative factor increases as the access degree $r$ increases.
b) The parameter $t=C-r$ corresponds to $\frac{M}{N}=\frac{1}{r}-\frac{1}{r\binom{C}{r}}$. We compare the rate in Theorem 1 with a benchmark scheme obtained by combining the rate-memory trade-off of the MKR scheme and the memory-rate pair $\left(\frac{C}{r},0\right)$ (for the cyclic wrap-around MACC scheme, the achievability of $\left(\frac{C}{r},0\right)$ is shown in [15]). Note that, for the combinatorial MACC scheme, the memory-rate pair $\left(\frac{C}{r},0\right)$ is achievable only with coded placement. For ease of comparison, we assume that $C/r$ is an integer. Then the scheme in Theorem 1 achieves the rate $R(\frac{M}{N})=\frac{1}{\binom{C}{r}}$ at $\frac{M}{N}=\frac{1}{r}-\frac{1}{r\binom{C}{r}}$. At the same $M/N$, the benchmark scheme achieves the rate $R_{\text{benchmark}}(\frac{M}{N})=\frac{C}{r\binom{C}{r}}\frac{\binom{C}{\frac{C}{r}+r-1}}{\binom{C}{\frac{C}{r}-1}}$. Therefore, we have the following multiplicative reduction factor at $\frac{M}{N}=\frac{1}{r}-\frac{1}{r\binom{C}{r}}$:

\[\frac{R_{\text{benchmark}}(\frac{M}{N})}{R(\frac{M}{N})}=\frac{C}{r}\frac{\binom{C}{\frac{C}{r}+r-1}}{\binom{C}{\frac{C}{r}-1}}\]

where $\frac{R_{\text{benchmark}}(\frac{M}{N})}{R(\frac{M}{N})}>1$ since $\frac{C}{r}-1<\frac{C}{r}+r-1\leq C-(\frac{C}{r}-1)$. That is, in general, $\frac{R_{\text{benchmark}}(\frac{M}{N})}{R(\frac{M}{N})}\approx C^{r-1}$ at the considered normalized cache memory value.
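The two reduction factors in this remark are easy to tabulate. A small sketch (the function names are ours, chosen for illustration) mirrors the closed-form expressions above:

```python
from math import comb

def gain_low_memory(C, r):
    """Multiplicative rate reduction at t = 2, case a) of Remark 1."""
    return 1 + r * (C + 1) * (r - 1) / (2 * (C - r - 1) * (C - 1))

def gain_high_memory(C, r):
    """Benchmark-to-Scheme-1 rate ratio at t = C - r, case b),
    assuming r divides C."""
    k = C // r
    return (C / r) * comb(C, k + r - 1) / comb(C, k - 1)
```

For example, $(C,r)=(8,3)$ gives a low-memory factor of about $1.96$, and $(C,r)=(4,2)$ gives a high-memory factor of exactly $2$.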

Next, we present a combinatorial MACC scheme that performs better than the MKR scheme in the lower memory regime. In the following theorem, we present an achievability result for $M\leq(N-\binom{C-1}{r})/C$.

Theorem 2.

For the $(C,r,M,N)$ combinatorial MACC scheme with $N>\binom{C-1}{r}$, the rate $R(M)=N-CM$ is achievable for $0\leq M\leq(N-\binom{C-1}{r})/C$.

Proof of Theorem 2 is given in Section V-B. \blacksquare

When the number of files $N$ satisfies $\binom{C-1}{r}<N\leq\binom{C}{r}$, the rate $R(M)=N-CM$ is strictly less than the rate of the MKR scheme for $0\leq M\leq(N-\binom{C-1}{r})/C$.

Remark 2.

The scheme in Theorem 2 achieves the rate $R(M)=\binom{C-1}{r}$ at $M=(\binom{C}{r}-\binom{C-1}{r})/C$, if $N=\binom{C}{r}$. At the same time, the optimal rate under uncoded placement is $R^{*}_{\text{uncoded}}(M)=\binom{C-1}{r}\left(1+\frac{r}{C(r+1)}\right)$. Therefore, when $N=\binom{C}{r}$, we have

\[\frac{R^{*}_{\text{uncoded}}(M)}{R(M)}=1+\frac{r}{C(r+1)}\]

at $M=(N-\binom{C-1}{r})/C$. That is, the rate reduction obtained from coded placement diminishes as the number of caches in the system increases.

Now, we present a lower bound on the optimal rate-memory trade-off for the $(C,r,M,N)$ combinatorial MACC scheme.

Theorem 3.

For the $(C,r,M,N)$ combinatorial MACC scheme,

\[R^{*}(M)\geq\max_{\substack{s\in\{r,r+1,r+2,\dots,C\}\\ \ell\in\{1,2,\dots,\lceil N/\binom{s}{r}\rceil\}}}\frac{1}{\ell}\Big\{N-\frac{\omega_{s,\ell}}{s+\omega_{s,\ell}}\Big(N-\ell\binom{s}{r}\Big)^{+}-\Big(N-\ell\binom{C}{r}\Big)^{+}-sM\Big\}\tag{8}\]

where $\omega_{s,\ell}=\min\big(C-s,\;\min\{i:\binom{s+i}{r}\geq\lceil N/\ell\rceil\}\big)$ and $(x)^{+}=\max(x,0)$.
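Since (8) is a finite maximization, it can be evaluated directly. Below is one possible transcription (a sketch; the function names `omega` and `lower_bound` are ours, not from the paper):

```python
from math import comb, ceil

def omega(C, r, s, l, N):
    """omega_{s,l}: min(C - s, smallest i with comb(s+i, r) >= ceil(N/l))."""
    target = ceil(N / l)
    i = 0
    while comb(s + i, r) < target:
        i += 1
    return min(C - s, i)

def lower_bound(C, r, N, M):
    """Evaluate the right-hand side of (8) by exhaustive search over s, l."""
    best = 0.0
    for s in range(r, C + 1):
        for l in range(1, ceil(N / comb(s, r)) + 1):
            w = omega(C, r, s, l, N)
            val = (N
                   - w / (s + w) * max(N - l * comb(s, r), 0)
                   - max(N - l * comb(C, r), 0)
                   - s * M) / l
            best = max(best, val)
    return best
```

For $(C,r,N)=(4,2,6)$, the bound evaluates to $N-CM$ on $0\leq M\leq 0.75$, matching the converse of Theorem 5.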

Proof of Theorem 3 is given in Section V-C. \blacksquare

The lower bound in Theorem 3 has two parameters: $s$, which is related to the number of caches, and $\ell$, which is associated with the number of transmissions. To obtain this lower bound, we consider $s\in[r:C]$ caches and the $\binom{s}{r}$ users who access caches only among those $s$ caches. From $\lceil N/\binom{s}{r}\rceil$ transmissions, all $N$ files can be decoded at those $\binom{s}{r}$ users' end by appropriately choosing the demand vectors. Further, we split the total $\lceil N/\binom{s}{r}\rceil$ transmissions into the first $\ell$ transmissions and the remaining $\lceil N/\binom{s}{r}\rceil-\ell$ transmissions, and bound the uncertainty in the two cases separately. This bounding technique is similar to the approach used in [10] and [24], where lower bounds were derived for the dedicated cache scheme and the MACC scheme with cyclic wrap-around, respectively.

In Fig. 3, we plot the rate-memory trade-off in Theorem 1 along with that of the MKR scheme. For $0\leq M\leq 7$ ($t=1$ gives $M=7$), both schemes have the same rate-memory trade-off. However, for $M>7$, our scheme has a strictly lower rate than the MKR scheme. It is worth noting that, in order to achieve $R(M)=0$, it is sufficient to have $M=N/r$, whereas the optimal scheme with uncoded placement requires $M=N(C-r+1)/C$ to achieve $R(M)=0$. In Fig. 4, we compare the performance of the $(5,3,M,10)$ combinatorial MACC scheme under uncoded and coded placements. The rate-memory trade-off of the optimal scheme under uncoded placement (the MKR scheme) is denoted as $R^{*}_{\text{uncoded}}(M)$, whereas $R_{\text{coded}}(M)$ is obtained from Theorem 1 and Theorem 2, where the placement is coded. For the $(4,2,M,6)$ combinatorial MACC scheme, the improvement in rate using coded placement can be observed from Fig. 5. Also, notice that the rate-memory trade-offs in Theorem 1 and Theorem 2 are optimal when $M\geq 2.5$ and $M\leq 0.75$, respectively.

Figure 3: $(8,3,M,56)$ combinatorial MACC scheme. A comparison between the MKR scheme and Scheme 1 ($M\geq 6$)
Figure 4: $(5,3,M,10)$ combinatorial MACC scheme: A comparison between uncoded placement and coded placement
Figure 5: $(4,2,M,6)$ combinatorial MACC scheme: The MKR scheme, Scheme 1, Scheme 2, and the lower bound

Using the lower bound in Theorem 3, we show the following optimality results. First, we prove that the rate-memory trade-off in Theorem 1 is optimal in the higher memory regime.

Theorem 4.

For the $(C,r,M,N)$ combinatorial MACC scheme,

\[R^{*}(M)=1-\frac{rM}{N}\tag{9}\]

for $\frac{\binom{C}{r}-1}{r\binom{C}{r}}\leq\frac{M}{N}\leq\frac{1}{r}$.

Proof of Theorem 4 is given in Section V-D. \blacksquare

The optimality is shown towards the higher memory regime. When $\frac{\binom{C}{r}-1}{r\binom{C}{r}}\leq\frac{M}{N}\leq\frac{1}{r}$, each user requires only a single subfile from the delivery phase, and the server can meet that demand with a single coded transmission. Also, it is worth noting that in our proposed scheme (Scheme 1), the delivery phase is optimal under the placement that we follow, because the users in our proposed scheme and the users in the MKR scheme obtain the same subfiles from the respective placement phases. Thus, if we could design a better delivery phase, the same would apply to the MKR scheme; however, that is impossible since the MKR scheme's delivery is optimal for the placement employed. As with the dedicated coded caching scheme (introduced in [1]), the optimal rate-memory trade-off of the $(C,r,M,N)$ combinatorial MACC scheme remains open.

In the lower memory regime, when $M<N/C$, the rate-memory trade-off in Theorem 1 is clearly not optimal, since the scheme in Theorem 2 achieves a better rate. Now, we show that the rate-memory trade-off in Theorem 2 is optimal if the number of users is not more than the number of files.

Theorem 5.

For the $(C,r,M,N)$ combinatorial MACC scheme with $\binom{C-1}{r}<N\leq\binom{C}{r}$, we have

\[R^{*}(M)=N-CM\tag{10}\]

for $0\leq M\leq\frac{N-\binom{C-1}{r}}{C}$.

Proof of Theorem 5 is given in Section V-E. \blacksquare

In the proposed scheme, $N-\binom{C-1}{r}$ coded subfiles are kept in each cache (for $M=(N-\binom{C-1}{r})/C$) in the placement phase. However, the delivery phase is uncoded; that is, the server transmits uncoded subfiles. A user receives $C-r$ subfiles (of the demanded file) directly, and the remaining $r$ subfiles need to be decoded from the $r$ accessible caches. A user requires $\binom{C-1}{r}$ subfiles from the delivery phase to decode a subfile of the demanded file from an accessible cache. That is, using the $N-\binom{C-1}{r}$ coded subfiles in a cache and $\binom{C-1}{r}$ uncoded subfiles from the delivery phase, a user can decode $N$ subfiles, out of which only one is required. Therefore, the ratio between the number of coded subfiles stored in a cache and the number of subfiles required to decode a subfile from a cache increases with $N$. Thus, the optimality of the scheme is limited to the case $N\leq\binom{C}{r}$.

Next, we show that the rate-memory trade-off of Scheme 1 is within a constant multiplicative factor of the optimal rate-memory trade-off when $r=2$ and the number of files at the server is not less than the number of users in the system.

Theorem 6.

For the $(C,r=2,M,N)$ combinatorial MACC scheme with $N\geq\binom{C}{2}$, we have

\[\frac{R(M)}{R^{*}(M)}\leq 21\tag{11}\]

where $R(M)$ is the achievable rate as defined in (7).

Proof of Theorem 6 is given in Section V-F. \blacksquare

In order to prove the optimality gap result, we divide the entire memory regime into three regions. For the lower memory regime ($0\leq M\leq 2N/C$), we approximate the rate by $R(M=0)=\binom{C}{2}$. In the second regime, where $2N/C\leq M\leq 0.1N$, we approximate the rate by $(N/M)^{2}$ (note that $R(M)\leq(N/M)^{2}$). Finally, for $M\geq 0.1N$, we approximate $R(M)\approx(N/M)^{2}(1-2M/N)$. To obtain the optimality gap result, we compute the ratio of these approximate rate-memory trade-offs to the lower bound on $R^{*}(M)$ in Theorem 3. Due to these approximations and the analytical bounding techniques used, the optimality gap of 21 in Theorem 6 is loose. From numerical simulations, for the $(C,r=2,M,N)$ combinatorial MACC scheme, we have

\[\frac{R(M)}{R^{*}(M)}\leq 11.\]

Also, the numerical simulations suggest that the multiplicative gap between the rate-memory trade-off given by Scheme 1 and the lower bound on $R^{*}(M)$ given in Theorem 3 increases as $r$ increases. However, providing an analytical result on the optimality gap becomes increasingly hard with increasing $r$ because of the optimization problem in the lower bound expression (8).

Remark 3.

To prove the optimality gap result for the combinatorial MACC scheme with $r=2$, we divide the entire parameter regime into 3 cases and further into several sub-cases. For a general $r$, the optimization problem in the lower bound (8) and the presence of two further variables in the rate expression, namely $C$ and $r$, apart from $M$ and $N$, make an analytical derivation of a gap result increasingly hard. Also, numerical simulations suggest that the multiplicative gap between $R(M)$ given by Scheme 1 and the lower bound on $R^{*}(M)$ given in Theorem 3 increases as $r$ increases. Additionally, for the $(C,r\leq C/2,M,N\geq\binom{C}{r})$ combinatorial MACC scheme with $uN/C\leq M\leq vN/r$, for some $u\geq 1$ and $v<1$, we have

\[\frac{R(M)}{R^{*}(M)}\leq cz^{r}r!\]

where $c,z>1$ are constants (proof is given in Appendix D). Similarly, for $M\geq vN/r$, we have shown in Appendix D that

\[\frac{R(M)}{R^{*}(M)}\leq\left(\frac{r}{v}\right)^{r}.\]

Numerical simulations suggest that the multiplicative gap of 11 for $r=2$ increases to 15 and 55 as $r$ increases to 3 and 5, respectively. This indicates that the order provided by the analysis in Appendix D is loose, because for a general $r$ we are unable to optimize the lower bound over the two variables $s$ and $\ell$. However, the gap results indicate the possible existence of either an achievable scheme with an additional gain that scales with $r$, or a better lower bound that exceeds the bound in Theorem 3 by approximately a factor of $r!$. In addition, if such a better scheme exists, it would imply that a coded caching scheme with coded placement can outperform the optimal scheme with uncoded placement by a multiplicative factor that is not a constant and scales with $r$.

V Proofs

V-A Proof of Theorem 1

In this section, we present a coding scheme (we refer to it as Scheme 1) that achieves the rate $\binom{C}{t+r}/\binom{C}{t}$ at the cache memory $M$ (corresponding to every $t\in[C-r+1]$) given in (6). First, let us consider the case $t\in[C-r]$ (for $t=C-r+1$, we present a separate coding scheme at the end of this proof).

a) Placement phase: The server divides each file into $\binom{C}{t}$ non-overlapping subfiles of equal size. Each subfile is indexed with a $t$-sized set $\mathcal{T}\subseteq[C]$. We assume $\mathcal{T}$ to be an ordered set, i.e., $\mathcal{T}=\{\tau_{1},\tau_{2},\dots,\tau_{i},\dots,\tau_{t}\}$ with $\tau_{1}<\tau_{2}<\dots<\tau_{t}$. Also, whenever an ordering among the subfiles, indexed with $t$-sized subsets of $[C]$, is required, lexicographic ordering is followed. We have the file $W_{n}=\{W_{n,\mathcal{T}}:\mathcal{T}\subseteq[C],|\mathcal{T}|=t\}$ for every $n\in[N]$. Let us define $\tilde{r}\triangleq\min(r,t)$. Each subfile is further divided into $\tilde{r}!$ mini-subfiles. The subfile $W_{n,\mathcal{T}}$ is divided as follows:

\[W_{n,\mathcal{T}}=\{W_{n,\mathcal{T}}^{j}:j\in[\tilde{r}!]\}.\]

Let $\mathbf{G}$ be a generator matrix of an $[\tilde{r}!t,\tilde{r}!]$ MDS code. For every $n\in[N]$ and for every $\mathcal{T}\subseteq[C]$ such that $|\mathcal{T}|=t$, the server encodes the mini-subfiles $(W_{n,\mathcal{T}}^{j}:j\in[\tilde{r}!])$ with $\mathbf{G}$ and obtains the coded mini-subfiles

\[(Y_{n,\mathcal{T}}^{1},Y_{n,\mathcal{T}}^{2},\dots,Y_{n,\mathcal{T}}^{\tilde{r}!t})=(W_{n,\mathcal{T}}^{j}:j\in[\tilde{r}!])\,\mathbf{G}.\tag{12}\]

Now, for an integer $c\in[C]$ and a set $\mathcal{T}\subseteq[C]$, $|\mathcal{T}|=t$, such that $\mathcal{T}\ni c$, we define a function $\phi_{c}^{t}:\{\mathcal{T}\subseteq[C]:\mathcal{T}\ni c,|\mathcal{T}|=t\}\rightarrow[t]$. The function $\phi_{c}^{t}(\mathcal{T})$ gives the position of $c$ in the ordered set $\mathcal{T}$ of size $t$. If $\mathcal{T}=\{\tau_{1},\tau_{2},\dots,\tau_{i}=c,\dots,\tau_{t}\}$, then $\phi_{c}^{t}(\mathcal{T})=i$.
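The index map $\phi_{c}^{t}$ is simply the rank of $c$ inside the sorted set $\mathcal{T}$. A one-line sketch (the name `phi` is our illustration):

```python
def phi(c, T):
    """Position (1-indexed) of cache index c in the sorted set T,
    i.e., phi_c^t(T) in the text. Requires c to be a member of T."""
    T = sorted(T)
    assert c in T
    return T.index(c) + 1
```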

The placement phase consists of $\tilde{r}$ rounds: Round $0$, Round $1$, $\dots$, Round $\tilde{r}-1$. The content stored in cache $c$ in Round $b$ is denoted as $Z_{c}^{b}$, where $b\in\{0\}\cup[\tilde{r}-1]$. In Round $0$, the server fills cache $c$ as

\[Z_{c}^{0}=\Big\{Y_{n,\mathcal{T}}^{q_{\mathcal{T}}^{0}+\ell_{0}}:\ell_{0}\in[(\tilde{r}-1)!],\ \mathcal{T}\subseteq[C]:\mathcal{T}\ni c,|\mathcal{T}|=t,\ n\in[N]\Big\}\]

where $q_{\mathcal{T}}^{0}=(\phi_{c}^{t}(\mathcal{T})-1)(\tilde{r}-1)!$. Notice that $(\tilde{r}-1)!\binom{C-1}{t-1}$ coded mini-subfiles of each file are placed in every cache in Round $0$.

Now let us focus on the cache placement in Round $b$, where $b\in[\tilde{r}-1]$. In this round, the server further encodes certain coded mini-subfiles using an MDS code generator matrix. Let $\mathbf{G}^{(b)}$ be a systematic generator matrix of a $\big[2\binom{C-1}{t-1}-\sum_{i=1}^{b}\binom{r-1}{\tilde{r}-i}\binom{C-r}{t-\tilde{r}+i-1},\binom{C-1}{t-1}\big]$ MDS code, and write $\mathbf{G}^{(b)}=[\mathbf{I}_{\binom{C-1}{t-1}}\,|\,\mathbf{P}^{(b)}]$, where $\mathbf{P}^{(b)}$ has $\binom{C-1}{t-1}$ rows and $\binom{C-1}{t-1}-\sum_{i=1}^{b}\binom{r-1}{\tilde{r}-i}\binom{C-r}{t-\tilde{r}+i-1}$ columns. In Round $b$, the server picks the coded mini-subfiles $Y_{n,\mathcal{T}}^{q_{\mathcal{T}}^{b}+\ell_{b}}$ for every set $\mathcal{T}\subseteq[C]$, $|\mathcal{T}|=t$, such that $\mathcal{T}\ni c$, where $q_{\mathcal{T}}^{b}=\frac{\tilde{r}!}{\tilde{r}-b+1}t+(\phi_{c}^{t}(\mathcal{T})-1)\frac{\tilde{r}!}{(\tilde{r}-b)(\tilde{r}-b+1)}$ and $\ell_{b}\in\big[\frac{\tilde{r}!}{(\tilde{r}-b)(\tilde{r}-b+1)}\big]$ (the offsets $q_{\mathcal{T}}^{b}$ are chosen so that the index ranges picked in different rounds do not overlap and stay within the $\tilde{r}!t$ coded mini-subfiles of (12)). Those mini-subfiles are encoded with $\mathbf{P}^{(b)}$ as follows:

\[(Q_{n,c}^{\ell_{b},1},Q_{n,c}^{\ell_{b},2},\dots,Q_{n,c}^{\ell_{b},\binom{C-1}{t-1}-\sum_{i=1}^{b}\binom{r-1}{\tilde{r}-i}\binom{C-r}{t-\tilde{r}+i-1}})=\Big(Y_{n,\mathcal{T}}^{q_{\mathcal{T}}^{b}+\ell_{b}}:\mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\Big)\mathbf{P}^{(b)},\quad\forall\ell_{b}\in\Big[\frac{\tilde{r}!}{(\tilde{r}-b)(\tilde{r}-b+1)}\Big],\ n\in[N].\tag{13}\]

We refer to the newly obtained coded mini-subfiles $Q_{n,c}^{\ell_{b},j}$ as doubly-encoded mini-subfiles. In Round $b$, the server places the following doubly-encoded mini-subfiles in cache $c$:

\[Z_{c}^{b}=\Big\{Q_{n,c}^{\ell_{b},1},Q_{n,c}^{\ell_{b},2},\dots,Q_{n,c}^{\ell_{b},\binom{C-1}{t-1}-\sum_{i=1}^{b}\binom{r-1}{\tilde{r}-i}\binom{C-r}{t-\tilde{r}+i-1}}:\ell_{b}\in\Big[\frac{\tilde{r}!}{(\tilde{r}-b)(\tilde{r}-b+1)}\Big],\ n\in[N]\Big\}.\]

Note that, in Round $b$, a total of $\frac{\tilde{r}!}{(\tilde{r}-b)(\tilde{r}-b+1)}\big(\binom{C-1}{t-1}-\sum_{i=1}^{b}\binom{r-1}{\tilde{r}-i}\binom{C-r}{t-\tilde{r}+i-1}\big)$ doubly-encoded mini-subfiles of each file are kept in every cache.

The overall content stored in cache $c$ in the placement phase is

\[Z_{c}=\bigcup_{b=0}^{\tilde{r}-1}Z_{c}^{b},\quad\forall c\in[C].\]

Each coded mini-subfile has $\frac{1}{\tilde{r}!\binom{C}{t}}$ of a file's size. Therefore, the normalized cache memory is

\begin{align*}
\frac{M}{N}&=\frac{(\tilde{r}-1)!\binom{C-1}{t-1}}{\tilde{r}!\binom{C}{t}}+\frac{\sum\limits_{b=1}^{\tilde{r}-1}\frac{\tilde{r}!}{(\tilde{r}-b)(\tilde{r}-b+1)}\Big(\binom{C-1}{t-1}-\sum\limits_{i=1}^{b}\binom{r-1}{\tilde{r}-i}\binom{C-r}{t-\tilde{r}+i-1}\Big)}{\tilde{r}!\binom{C}{t}}\\
&=\frac{t}{C}-\frac{1}{\binom{C}{t}}\sum\limits_{i=1}^{\tilde{r}-1}\frac{\tilde{r}-i}{r}\binom{r}{\tilde{r}-i+1}\binom{C-r}{t-\tilde{r}+i-1}.\tag{14}
\end{align*}

The calculation of M/NM/N value is elaborated in Appendix A.

b) Delivery phase: Let $W_{d_{\mathcal{U}}}$ be the file demanded by user $\mathcal{U}$, where $\mathcal{U}\subseteq[C]$ and $|\mathcal{U}|=r$. During the delivery phase, the server makes a transmission corresponding to every $(t+r)$-sized subset of $[C]$. The transmission corresponding to a set $\mathcal{S}\subseteq[C]$ with $|\mathcal{S}|=t+r$ is

\[\bigoplus_{\substack{\mathcal{U}\subseteq\mathcal{S}\\|\mathcal{U}|=r}}W_{d_{\mathcal{U}},\mathcal{S}\backslash\mathcal{U}}.\]

Note that, for a given $t\in[C]$, this delivery phase is the same as the delivery phase in the MKR scheme. There are $\binom{C}{t+r}$ subsets of $[C]$ of size $t+r$, and the server makes one transmission corresponding to each of them. Therefore, the number of coded subfiles transmitted in the delivery phase is $\binom{C}{t+r}$. Each subfile has $\frac{1}{\binom{C}{t}}$ of a file's size; therefore, the rate of transmission is $\binom{C}{t+r}/\binom{C}{t}$.
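The transmission schedule is a plain enumeration of $(t+r)$-subsets. The sketch below (the helper name `delivery_index_sets` is ours) generates the index sets and makes the counting behind the rate explicit; the actual payload for each set $\mathcal{S}$ would be the XOR of the subfiles $W_{d_{\mathcal{U}},\mathcal{S}\backslash\mathcal{U}}$ over all $r$-subsets $\mathcal{U}$ of $\mathcal{S}$:

```python
from itertools import combinations
from math import comb

def delivery_index_sets(C, r, t):
    """Index sets of the coded transmissions: all (t+r)-subsets S of [C].
    Each transmission serves the comb(t+r, r) users U contained in S."""
    return [set(S) for S in combinations(range(1, C + 1), t + r)]
```

For $(C,r,t)=(8,3,2)$, this yields $\binom{8}{5}=56$ transmissions of one subfile each, i.e., a rate of $\binom{8}{5}/\binom{8}{2}=2$.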

Next, we show that by employing the above placement and delivery phases, all the users can decode their demanded files. Note that all the MDS code generator matrices used for encoding the subfiles are known to the users as well. Since the delivery phase of our proposed scheme is the same as that of the MKR scheme, it suffices to show that, from the coded subfiles placed in the caches, a user can decode all the subfiles that can be accessed by the corresponding user in the MKR scheme. Thus, we show that user $\mathcal{U}$ can decode the subfiles $W_{n,\mathcal{T}}$ such that $\mathcal{T}\cap\mathcal{U}\neq\phi$ from the coded subfiles stored in the accessible caches, which ensures the decodability of the demanded files. First, we show that an arbitrary user $\mathcal{U}$ can decode $W_{n,\mathcal{T}}$ if $|\mathcal{T}\cap\mathcal{U}|=\tilde{r}$. Then we show that if the user has decoded all the subfiles $W_{n,\mathcal{T}^{\prime}}$ such that $|\mathcal{T}^{\prime}\cap\mathcal{U}|>\tilde{r}-\beta$, then the user can decode the subfiles $W_{n,\mathcal{T}}$ with $|\mathcal{T}\cap\mathcal{U}|=\tilde{r}-\beta$, where $\beta\in[\tilde{r}-1]$. By sequentially applying $\beta=1$ to $\beta=\tilde{r}-1$, we obtain the required result that the user can decode all the subfiles $W_{n,\mathcal{T}}$ with $|\mathcal{T}\cap\mathcal{U}|\geq 1$.

c) Decodability: Consider user $\mathcal{U}$ accessing cache $c$ for every $c\in\mathcal{U}$, and recall that $\tilde{r}=\min(r,t)$. Further, consider a $t$-sized set $\mathcal{T}\subseteq[C]$ such that $|\mathcal{T}\cap\mathcal{U}|=\tilde{r}$. The user has access to the following coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$ from the caches:

\[\bigcup_{c\in\mathcal{T}\cap\mathcal{U}}\Big\{\underbrace{Y_{n,\mathcal{T}}^{q_{\mathcal{T}}^{0}+\ell_{0}}}_{\text{available from }Z_{c}^{0}}:\ell_{0}\in[(\tilde{r}-1)!]\Big\}.\]

Note that the number of coded mini-subfiles of $W_{n,\mathcal{T}}$ accessible to the user is

\[\Big|\bigcup_{c\in\mathcal{T}\cap\mathcal{U}}\big\{Y_{n,\mathcal{T}}^{q_{\mathcal{T}}^{0}+\ell_{0}}:\ell_{0}\in[(\tilde{r}-1)!]\big\}\Big|=(\tilde{r}-1)!\,\tilde{r}=\tilde{r}!.\]

Thus user $\mathcal{U}$ can decode the $\tilde{r}!$ mini-subfiles of $W_{n,\mathcal{T}}$ from the $\tilde{r}!$ coded mini-subfiles (from (12)), and completely retrieve the subfile $W_{n,\mathcal{T}}$. That is, user $\mathcal{U}$ can decode every subfile $W_{n,\mathcal{T}}$ with $|\mathcal{T}\cap\mathcal{U}|=\tilde{r}$ using $Z_{c}^{0}$, $c\in\mathcal{U}$, alone. Also, note that the user then obtains all the coded mini-subfiles $\{Y_{n,\mathcal{T}}^{1},Y_{n,\mathcal{T}}^{2},\dots,Y_{n,\mathcal{T}}^{\tilde{r}!t}\}$ of $W_{n,\mathcal{T}}$ (from (12)), since $\mathbf{G}$ is known to all the users.

Now, we assume that user $\mathcal{U}$ has decoded all the subfiles $W_{n,\mathcal{T}^{\prime}}$ with $|\mathcal{T}^{\prime}\cap\mathcal{U}|>\tilde{r}-\beta$, where $\beta\in[\tilde{r}-1]$. Then, we show that the user can decode the subfiles $W_{n,\mathcal{T}}$ with $|\mathcal{T}\cap\mathcal{U}|=\tilde{r}-\beta$. From $Z_{c}^{\beta}$, user $\mathcal{U}$ has the doubly-encoded mini-subfiles $\{Q_{n,c}^{\ell_{\beta},1},Q_{n,c}^{\ell_{\beta},2},\dots,Q_{n,c}^{\ell_{\beta},\binom{C-1}{t-1}-\sum_{i=1}^{\beta}\binom{r-1}{\tilde{r}-i}\binom{C-r}{t-\tilde{r}+i-1}}\}$ for every $\ell_{\beta}\in\big[\frac{\tilde{r}!}{(\tilde{r}-\beta)(\tilde{r}-\beta+1)}\big]$. Since the user has decoded all the subfiles $W_{n,\mathcal{T}^{\prime}}$ with $|\mathcal{T}^{\prime}\cap\mathcal{U}|>\tilde{r}-\beta$, the user knows all the coded mini-subfiles in $\{Y_{n,\mathcal{T}^{\prime}}^{q_{\mathcal{T}^{\prime}}^{\beta}+\ell_{\beta}}:|\mathcal{T}^{\prime}\cap\mathcal{U}|>\tilde{r}-\beta\}$. Note that, for a given $c\in\mathcal{U}$, we have

\[|\{\mathcal{T}^{\prime}:\mathcal{T}^{\prime}\cap\mathcal{U}\ni c,\ |\mathcal{T}^{\prime}\cap\mathcal{U}|>\tilde{r}-\beta\}|=\sum_{i=1}^{\beta}\binom{r-1}{\tilde{r}-i}\binom{C-r}{t-\tilde{r}+i-1}.\]

Therefore, using $\{Y_{n,\mathcal{T}^{\prime}}^{q_{\mathcal{T}^{\prime}}^{\beta}+\ell_{\beta}}:|\mathcal{T}^{\prime}\cap\mathcal{U}|>\tilde{r}-\beta\}$ and $\{Q_{n,c}^{\ell_{\beta},1},Q_{n,c}^{\ell_{\beta},2},\dots,Q_{n,c}^{\ell_{\beta},\binom{C-1}{t-1}-\sum_{i=1}^{\beta}\binom{r-1}{\tilde{r}-i}\binom{C-r}{t-\tilde{r}+i-1}}\}$, the user can decode the coded mini-subfiles $\{Y_{n,\mathcal{T}}^{q_{\mathcal{T}}^{\beta}+\ell_{\beta}}:\mathcal{T}\cap\mathcal{U}\ni c\}$ (from (13)). Now, user $\mathcal{U}$ has the following coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$:

\[\bigcup_{c\in\mathcal{T}\cap\mathcal{U}}\Big\{Y_{n,\mathcal{T}}^{q_{\mathcal{T}}^{0}+\ell_{0}},Y_{n,\mathcal{T}}^{q_{\mathcal{T}}^{1}+\ell_{1}},\dots,Y_{n,\mathcal{T}}^{q_{\mathcal{T}}^{\beta}+\ell_{\beta}}:\ell_{0}\in[(\tilde{r}-1)!],\ \ell_{1}\in[(\tilde{r}-2)!],\dots,\ell_{\beta}\in\Big[\frac{\tilde{r}!}{(\tilde{r}-\beta)(\tilde{r}-\beta+1)}\Big]\Big\}\]

where $|\mathcal{T}\cap\mathcal{U}|=\tilde{r}-\beta$. Therefore, the number of coded mini-subfiles of $W_{n,\mathcal{T}}$ with the user is

\[\Big|\bigcup_{c\in\mathcal{T}\cap\mathcal{U}}\Big\{Y_{n,\mathcal{T}}^{q_{\mathcal{T}}^{0}+\ell_{0}},Y_{n,\mathcal{T}}^{q_{\mathcal{T}}^{1}+\ell_{1}},\dots,Y_{n,\mathcal{T}}^{q_{\mathcal{T}}^{\beta}+\ell_{\beta}}:\ell_{0}\in[(\tilde{r}-1)!],\ \ell_{1}\in[(\tilde{r}-2)!],\dots,\ell_{\beta}\in\Big[\frac{\tilde{r}!}{(\tilde{r}-\beta)(\tilde{r}-\beta+1)}\Big]\Big\}\Big|=\Big((\tilde{r}-1)!+\sum_{b=1}^{\beta}\frac{\tilde{r}!}{(\tilde{r}-b)(\tilde{r}-b+1)}\Big)(\tilde{r}-\beta).\]

The appropriate choice of $q_{\mathcal{T}}^{i}$, $i\in[\beta]$, ensures that all these coded mini-subfiles are distinct. Now, we have

\begin{align*}
\sum_{b=1}^{\beta}\frac{\tilde{r}!}{(\tilde{r}-b)(\tilde{r}-b+1)}&=\tilde{r}!\sum_{b=1}^{\beta}\Big(\frac{1}{\tilde{r}-b}-\frac{1}{\tilde{r}-b+1}\Big)\\
&=\tilde{r}!\Big(\frac{1}{\tilde{r}-\beta}-\frac{1}{\tilde{r}}\Big)=(\tilde{r}-1)!\,\frac{\beta}{\tilde{r}-\beta}.
\end{align*}
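The telescoping sum can also be checked numerically; `partial_sum` below is an illustrative helper of our own:

```python
from math import factorial

def partial_sum(r_t, beta):
    """Sum_{b=1}^{beta} r~!/((r~-b)(r~-b+1)) for r-tilde = r_t."""
    return sum(factorial(r_t) / ((r_t - b) * (r_t - b + 1))
               for b in range(1, beta + 1))
```

For $\tilde{r}=5$ and $\beta=3$, the sum equals $(\tilde{r}-1)!\,\beta/(\tilde{r}-\beta)=36$, and the total count $\big((\tilde{r}-1)!+36\big)(\tilde{r}-\beta)$ equals $\tilde{r}!=120$, as claimed next.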

Therefore, the user has

\[\Big|\bigcup_{c\in\mathcal{T}\cap\mathcal{U}}\Big\{Y_{n,\mathcal{T}}^{q_{\mathcal{T}}^{0}+\ell_{0}},\dots,Y_{n,\mathcal{T}}^{q_{\mathcal{T}}^{\beta}+\ell_{\beta}}:\ell_{0}\in[(\tilde{r}-1)!],\ \ell_{1}\in[(\tilde{r}-2)!],\dots,\ell_{\beta}\in\Big[\frac{\tilde{r}!}{(\tilde{r}-\beta)(\tilde{r}-\beta+1)}\Big]\Big\}\Big|=\Big((\tilde{r}-1)!+(\tilde{r}-1)!\frac{\beta}{\tilde{r}-\beta}\Big)(\tilde{r}-\beta)=\tilde{r}!\]

coded mini-subfiles of $W_{n,\mathcal{T}}$. Thus, the user can retrieve the subfile $W_{n,\mathcal{T}}$. In other words, user $\mathcal{U}$ can decode all the subfiles $W_{n,\mathcal{T}}$ such that $|\mathcal{T}\cap\mathcal{U}|=\tilde{r}-\beta$. Since this is true for every $\beta\in[\tilde{r}-1]$, the user obtains all the subfiles $W_{n,\mathcal{T}}$ with $\mathcal{T}\cap\mathcal{U}\neq\phi$ from the placement phase. That means, from the coded subfiles placed in the caches, a user can decode all the subfiles that can be accessed by the corresponding user in the MKR scheme. Thus, the decodability of the demanded files is guaranteed.

Finally, we consider the case $t=C-r+1$, which corresponds to $M=N/r$ (shown in Appendix C). In this case, the rate $R(N/r)=0$ is achievable by properly employing coding in the placement phase: every user can decode the entire library of files from the accessible cache contents alone. The cache placement is as follows. The server divides each file into $r$ non-overlapping subfiles of equal size, so that $W_n=\{W_{n,1},W_{n,2},\dots,W_{n,r}\}$ for all $n\in[N]$. Let $\mathbf{G}_{r\times C}$ be a generator matrix of a $[C,r]$ MDS code. For every $n\in[N]$, the server performs the following encoding procedure:

(W~n,1,W~n,2,,W~n,C)=(Wn,1,Wn,2,,Wn,r)𝐆.(\tilde{W}_{n,1},\tilde{W}_{n,2},\dots,\tilde{W}_{n,C})=(W_{n,1},W_{n,2},\dots,W_{n,r})\mathbf{G}.

Then the server fills the caches as

Zc={W~n,c,n[N]}.Z_{c}=\{\tilde{W}_{n,c},\forall n\in[N]\}.

Therefore, user 𝒰\mathcal{U} gets access to ZcZ_{c} for every c𝒰c\in\mathcal{U}. This means, the user has rr distinct coded subfiles W~n,c\tilde{W}_{n,c} for every c𝒰c\in\mathcal{U} and for all n[N]n\in[N]. From these coded subfiles, the user can decode all the files (from any rr coded subfiles, rr subfiles of a file can be decoded). This completes the proof of Theorem 1. \blacksquare
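The MDS decoding step above can be sketched concretely. Below, a Reed-Solomon style $[C,r]$ code over a prime field plays the role of $\mathbf{G}$; the field size $p=257$ and the toy values of $C$, $r$, and the subfiles are illustrative assumptions, not taken from the paper:

```python
p = 257  # prime field size; illustrative choice

def encode(subfiles, C):
    """Coded subfile for cache c: the polynomial whose coefficients are the
    r subfiles of a file, evaluated at x = c (a Vandermonde [C,r] MDS code)."""
    return {c: sum(s * pow(c, j, p) for j, s in enumerate(subfiles)) % p
            for c in range(1, C + 1)}

def decode(points, r):
    """Recover the r subfiles from any r coded subfiles by solving the
    r x r Vandermonde system mod p (nonsingular for distinct points)."""
    cs = sorted(points)[:r]
    A = [[pow(c, j, p) for j in range(r)] for c in cs]
    b = [points[c] for c in cs]
    for i in range(r):  # Gauss-Jordan elimination over GF(p)
        piv = next(k for k in range(i, r) if A[k][i])
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        inv = pow(A[i][i], p - 2, p)  # modular inverse via Fermat
        A[i] = [a * inv % p for a in A[i]]
        b[i] = b[i] * inv % p
        for k in range(r):
            if k != i and A[k][i]:
                f = A[k][i]
                A[k] = [(A[k][j] - f * A[i][j]) % p for j in range(r)]
                b[k] = (b[k] - f * b[i]) % p
    return b

C, r = 5, 3
subfiles = [12, 200, 7]                    # the r subfiles of one file
coded = encode(subfiles, C)                # cache c stores coded[c]
user = {c: coded[c] for c in (2, 4, 5)}    # a user accessing r caches
assert decode(user, r) == subfiles
```

Any other size-$r$ subset of caches decodes the same subfiles, which is exactly the property the placement relies on.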

Remark 4.

The proposed coding scheme depends upon $\min(r,t)$. The parameter $\tilde{r}=\min(r,t)$ is the maximum number of copies of a single subfile available, from the placement phase, to a user in the MKR scheme. When $t>r$, even though a subfile is stored in $t$ different caches, a user gets at most $r$ copies. In our scheme, we ensured that user $\mathcal{U}$ can decode a subfile $W_{n,\mathcal{T}}$ using the accessible cache contents whenever $1\leq|\mathcal{T}\cap\mathcal{U}|\leq\tilde{r}$. The number of rounds and the parameters of the encoding matrices depend upon $\tilde{r}$. However, the general idea behind the coded placement remains the same irrespective of the value of $\tilde{r}$.

V-B Proof of Theorem 2

In this section, we present a coding scheme, which we refer to as Scheme 2. First, we show the achievability of the rate $R(M)=\binom{C-1}{r}$ at $M=(N-\binom{C-1}{r})/C$.

a) Placement phase: The server divides each file into $C$ non-overlapping subfiles of equal size. Thus, we have $W_n=\{W_{n,1},W_{n,2},\dots,W_{n,C}\}$ for all $n\in[N]$. Let $\mathbf{G}$ be a systematic generator matrix of a $[2N-\binom{C-1}{r},N]$ MDS code. Thus, we can write $\mathbf{G}=[\mathbf{I}_{N}|\mathbf{P}_{N\times(N-\binom{C-1}{r})}]$, where $\mathbf{I}_{N}$ is the identity matrix of size $N$. Then the server encodes the subfiles $(W_{1,i},W_{2,i},\dots,W_{N,i})$ using $\mathbf{P}_{N\times(N-\binom{C-1}{r})}$, which results in $N-\binom{C-1}{r}$ coded subfiles. That is,

(Q1,i,Q2,i,,QN(C1r),i)=\displaystyle(Q_{1,i},Q_{2,i},\dots,Q_{N-\binom{C-1}{r},i})=
(W1,i,W2,i,,WN,i)𝐏N×N(C1r),i[C].\displaystyle\hskip 42.67912pt(W_{1,i},W_{2,i},\dots,W_{N,i})\mathbf{P}_{N\times N-\binom{C-1}{r}},\hskip 5.69046pt\forall i\in[C].

Then the server fills the caches as follows:

Zi={Q1,i,Q2,i,,QN(C1r),i},i[C].Z_{i}=\{Q_{1,i},Q_{2,i},\dots,Q_{N-\binom{C-1}{r},i}\},\hskip 5.69046pt\forall i\in[C].

The ithi^{\text{th}} cache contains N(C1r)N-\binom{C-1}{r} linearly independent coded combinations of the subfiles (W1,i,W2,i,,WN,i)(W_{1,i},W_{2,i},\dots,W_{N,i}). The size of a coded subfile is the same as the size of a subfile. Thus, we have the total size of a cache, M=(N(C1r))/CM=(N-\binom{C-1}{r})/C.

b) Delivery phase: Let $W_{d_{\mathcal{U}}}$ be the file demanded by user $\mathcal{U}$ in the delivery phase. Let $n(\mathbf{d})$ denote the number of distinct files demanded by the users, i.e., $n(\mathbf{d})=|\{W_{d_{\mathcal{U}}}:\mathcal{U}\subseteq[C],|\mathcal{U}|=r\}|$. If $n(\mathbf{d})\leq\binom{C-1}{r}$, then the server simply broadcasts those $n(\mathbf{d})$ files. Now, consider the case where $n(\mathbf{d})>\binom{C-1}{r}$. Let $n_i(\mathbf{d})$ denote the number of distinct files demanded by the users who do not access the $i^{\text{th}}$ cache, i.e., $n_i(\mathbf{d})=|\{W_{d_{\mathcal{U}}}:\mathcal{U}\subseteq[C]\backslash\{i\},|\mathcal{U}|=r\}|$. Note that $n_i(\mathbf{d})\leq\binom{C-1}{r}$ for all $i\in[C]$. If $n_i(\mathbf{d})=\binom{C-1}{r}$, then the server transmits $W_{d_{\mathcal{U}},i}$ for all $\mathcal{U}\subseteq[C]\backslash\{i\}$ such that $|\mathcal{U}|=r$. That is, the server transmits the $i^{\text{th}}$ subfile of the files demanded by the users who do not access cache $i$. If $n_i(\mathbf{d})<\binom{C-1}{r}$, then the server transmits the $i^{\text{th}}$ subfile of those $n_i(\mathbf{d})$ distinct files (files demanded by the users who do not access cache $i$) and the $i^{\text{th}}$ subfile of some other $\binom{C-1}{r}-n_i(\mathbf{d})$ files. The same procedure is carried out for all $i\in[C]$.

Corresponding to every i[C]i\in[C], the server transmits (C1r)\binom{C-1}{r} subfiles. Each subfile has 1/C1/C of a file size. Thus the rate achieved is (C1r)\binom{C-1}{r}. Now, the claim is that all the users can decode their demanded files completely. When n(𝐝)(C1r)n(\mathbf{d})\leq\binom{C-1}{r}, the decodability is trivial, as the demanded files are sent as such by the server. Let us consider the case where n(𝐝)>(C1r)n(\mathbf{d})>\binom{C-1}{r}. Consider user 𝒰\mathcal{U} who accesses cache ii for all i𝒰i\in\mathcal{U}. The user directly receives Wd𝒰,jW_{d_{\mathcal{U}},j} for all j[C]\𝒰j\in[C]\backslash\mathcal{U}. Further, user 𝒰\mathcal{U} receives the ithi^{\text{th}} subfile of some (C1r)\binom{C-1}{r} files, for every i𝒰i\in\mathcal{U}. Also, for all i𝒰i\in\mathcal{U}, the user has access to N(C1r)N-\binom{C-1}{r} coded subfiles of the subfiles {W1,i,W2,i,,WN,i}\{W_{1,i},W_{2,i},\dots,W_{N,i}\} from cache ii. Thus the user can decode Wn,iW_{n,i} for all n[N]n\in[N] and i𝒰i\in\mathcal{U}, since any NN columns of 𝐆\mathbf{G} are linearly independent. Therefore, user 𝒰\mathcal{U} gets Wd𝒰,kW_{d_{\mathcal{U}},k} for every k[C]k\in[C]. Hence, all the users can decode their demanded files.
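The decodability argument for a single cache index $i$ can be verified with exact arithmetic: the $\binom{C-1}{r}$ subfiles received over the broadcast plus the $N-\binom{C-1}{r}$ cached linear combinations form an invertible system in the $N$ unknowns $(W_{1,i},\dots,W_{N,i})$. The sketch below uses a Cauchy matrix for $\mathbf{P}$, one standard way to make $[\mathbf{I}_N|\mathbf{P}]$ MDS; the toy parameters and the choice of delivered file indices are illustrative assumptions:

```python
from fractions import Fraction
from math import comb

def cauchy(N, m):
    # N x m Cauchy matrix; every square submatrix of a Cauchy matrix is
    # nonsingular, which makes [I_N | P] a generator of an MDS code.
    return [[Fraction(1, (i + 1) + (N + j + 1)) for j in range(m)]
            for i in range(N)]

def is_invertible(rows):
    # exact Gaussian elimination over the rationals
    A = [row[:] for row in rows]
    n = len(A)
    for i in range(n):
        piv = next((k for k in range(i, n) if A[k][i] != 0), None)
        if piv is None:
            return False
        A[i], A[piv] = A[piv], A[i]
        for k in range(i + 1, n):
            f = A[k][i] / A[i][i]
            A[k] = [a - f * b for a, b in zip(A[k], A[i])]
    return True

C, r, N = 4, 2, 6                  # toy parameters
K1 = comb(C - 1, r)                # subfiles delivered per cache index
P = cauchy(N, N - K1)
unit = lambda n: [Fraction(int(j == n)) for j in range(N)]

# Equations known to the user about (W_{1,i}, ..., W_{N,i}):
# K1 directly received subfiles (unit rows) plus N - K1 cached
# combinations (the columns of P). Any K1 delivered indices work.
delivered = [3, 4, 5]
eqs = [unit(n) for n in delivered] + \
      [[P[row][j] for row in range(N)] for j in range(N - K1)]
assert len(eqs) == N and is_invertible(eqs)
```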

Further, if M=0M=0, the rate R=NR=N is trivially achievable by simply broadcasting all the files. Thus, by memory sharing, the rate R(M)=NCMR(M)=N-CM is achievable for 0M(N(C1r))/C0\leq M\leq(N-\binom{C-1}{r})/C. This completes the proof of Theorem 2. \blacksquare

V-C Proof of Theorem 3

The server has $N$ files $W_{[1:N]}$. Assume that the users are arranged in the lexicographic order of the subsets (of $[C]$, of size $r$) by which they are indexed. That is, $\{1,2,\dots,r\}$ comes first in the order, $\{1,2,\dots,r-1,r+1\}$ comes second, and so on. Consider the set of the first $s$ caches, where $s\in[r:C]$. The number of users who access caches only from this set is $\binom{s}{r}$. Let us focus on those $\binom{s}{r}$ users. Consider a demand vector $\mathbf{d}_1$ in which those $\binom{s}{r}$ users request the files $W_1,W_2,\dots,W_{\binom{s}{r}}$. That is, the first user (the user who comes first in the lexicographic ordering among those $\binom{s}{r}$ users) demands $W_1$, the second user demands $W_2$, and so on. The remaining users request arbitrary files from $W_{[1:N]}$, which are not of interest to us. Corresponding to $\mathbf{d}_1$, the server makes the transmission $X_1$. From the contents of the first $s$ caches $Z_{[1:s]}$ and the transmission $X_1$, the considered $\binom{s}{r}$ users can collectively decode the first $\binom{s}{r}$ files (one file at each user). Now, consider a demand vector $\mathbf{d}_2$ in which those $\binom{s}{r}$ users request the files $W_{\binom{s}{r}+1},W_{\binom{s}{r}+2},\dots,W_{2\binom{s}{r}}$, and let $X_2$ be the transmission corresponding to $\mathbf{d}_2$. From the cache contents $Z_{[1:s]}$ and the transmission $X_2$, those users can collectively decode the files $W_{[\binom{s}{r}+1:2\binom{s}{r}]}$. If we consider $\delta=\big\lceil\frac{N}{\binom{s}{r}}\big\rceil$ such demand vectors and the corresponding transmissions, all the $N$ files can be decoded by those $\binom{s}{r}$ users. Thus, we have

N\displaystyle N =H(W[1:N])H(Z[1:s],X[1:δ])\displaystyle=H(W_{[1:N]})\leq H(Z_{[1:s]},X_{[1:\delta]}) (15a)
=H(Z[1:s])+H(X[1:δ]|Z[1:s]).\displaystyle=H(Z_{[1:s]})+H(X_{[1:\delta]}|Z_{[1:s]}). (15b)

Consider an integer [1:δ]\ell\in[1:\delta]. We can expand (15b) as,

N\displaystyle N sM+H(X[1:]|Z[1:s])+H(X[+1:δ]|Z[1:s],X[1:])\displaystyle\leq sM+H(X_{[1:\ell]}|Z_{[1:s]})+H(X_{[\ell+1:\delta]}|Z_{[1:s]},X_{[1:\ell]}) (16a)
sM+H(X[1:])+H(X[+1:δ]|Z[1:s],X[1:],W[1:N~])\displaystyle\leq sM+H(X_{[1:\ell]})+H(X_{[\ell+1:\delta]}|Z_{[1:s]},X_{[1:\ell]},W_{[1:\tilde{N}]}) (16b)

where $\tilde{N}=\min(N,\ell\binom{s}{r})$. Using the cache contents $Z_{[1:s]}$ and the transmissions $X_{[1:\ell]}$, the files $W_{[1:\tilde{N}]}$ can be decoded; hence (16b) follows. Let us define $\omega_{s,\ell}\triangleq\min\left(C-s,\,\min\{i:\binom{s+i}{r}\geq\lceil\frac{N}{\ell}\rceil\}\right)$. Further, we can bound the entropy of the $\ell$ transmissions $X_{[1:\ell]}$ by $\ell R^{*}(M)$, since the rate of each transmission is at most $R^{*}(M)$. Thus, we have

NsM+R(M)+\displaystyle N\leq sM+\ell R^{*}(M)+
H(X[+1:δ],Z[s+1:s+ωs,]|Z[1:s],X[1:],W[1:N~])\displaystyle\qquad H(X_{[\ell+1:\delta]},Z_{[s+1:s+\omega_{s,\ell}]}|Z_{[1:s]},X_{[1:\ell]},W_{[1:\tilde{N}]}) (17a)
sM+R(M)+H(Z[s+1:s+ωs,]|Z[1:s],X[1:],W[1:N~])μ+\displaystyle\leq sM+\ell R^{*}(M)+\underbrace{H(Z_{[s+1:s+\omega_{s,\ell}]}|Z_{[1:s]},X_{[1:\ell]},W_{[1:\tilde{N}]})}_{\triangleq\mu}+
H(X[+1:δ]|Z[1:s+ωs,],X[1:],W[1:N~])ψ.\displaystyle\hskip 42.67912pt\underbrace{H(X_{[\ell+1:\delta]}|Z_{[1:s+\omega_{s,\ell}]},X_{[1:\ell]},W_{[1:\tilde{N}]})}_{\triangleq\psi}. (17b)

Now, we find an upper bound on μ\mu as follows:

μ\displaystyle\mu =H(Z[s+1:s+ωs,]|Z[1:s],X[1:],W[1:N~])\displaystyle=H(Z_{[s+1:s+\omega_{s,\ell}]}|Z_{[1:s]},X_{[1:\ell]},W_{[1:\tilde{N}]}) (18a)
H(Z[s+1:s+ωs,]|Z[1:s],W[1:N~])\displaystyle\leq H(Z_{[s+1:s+\omega_{s,\ell}]}|Z_{[1:s]},W_{[1:\tilde{N}]}) (18b)
=H(Z[1:s+ωs,]|W[1:N~])H(Z[1:s]|W[1:N~]).\displaystyle=H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})-H(Z_{[1:s]}|W_{[1:\tilde{N}]}). (18c)

By considering any $s$ caches from $[1:s+\omega_{s,\ell}]$, we can write an inequality similar to the one in (18c). That is,

μH(Z[1:s+ωs,]|W[1:N~])H(Z𝒜|W[1:N~])\mu\leq H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})-H(Z_{\mathcal{A}}|W_{[1:\tilde{N}]}) (19)

where 𝒜[s+ωs,],|𝒜|=s\mathcal{A}\subseteq[s+\omega_{s,\ell}],|\mathcal{A}|=s, and Z𝒜Z_{\mathcal{A}} is the contents stored in the caches indexed by the elements in the set 𝒜\mathcal{A}. That means we can find (s+ωs,s)\binom{s+\omega_{s,\ell}}{s} such inequalities. Averaging over all those inequalities, we get

μ\displaystyle\mu H(Z[1:s+ωs,]|W[1:N~])\displaystyle\leq H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})-
1(s+ωs,s)𝒜[s+ωs,]|𝒜|=sH(Z𝒜|W[1:N~]).\displaystyle\qquad\frac{1}{\binom{s+\omega_{s,\ell}}{s}}\sum_{\begin{subarray}{c}\mathcal{A}\subseteq[s+\omega_{s,\ell}]\\ |\mathcal{A}|=s\end{subarray}}H(Z_{\mathcal{A}}|W_{[1:\tilde{N}]}). (20)

By applying Lemma 1 for the random variables Z[1:s+ωs,]Z_{[1:s+\omega_{s,\ell}]}, we get

1s+ωs,H(Z[1:s+ωs,]|W[1:N~])\displaystyle\frac{1}{s+\omega_{s,\ell}}H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})\leq
1(s+ωs,s)𝒜[s+ωs,]|𝒜|=sH(Z𝒜|W[1:N~])s.\displaystyle\qquad\qquad\frac{1}{\binom{s+\omega_{s,\ell}}{s}}\sum_{\begin{subarray}{c}\mathcal{A}\subseteq[s+\omega_{s,\ell}]\\ |\mathcal{A}|=s\end{subarray}}\frac{H(Z_{\mathcal{A}}|W_{[1:\tilde{N}]})}{s}.

Upon rearranging, we have

1(s+ωs,s)\displaystyle\frac{1}{\binom{s+\omega_{s,\ell}}{s}} 𝒜[s+ωs,]|𝒜|=sH(Z𝒜|W[1:N~])\displaystyle\sum_{\begin{subarray}{c}\mathcal{A}\subseteq[s+\omega_{s,\ell}]\\ |\mathcal{A}|=s\end{subarray}}H(Z_{\mathcal{A}}|W_{[1:\tilde{N}]})\geq
ss+ωs,H(Z[1:s+ωs,]|W[1:N~]).\displaystyle\qquad\qquad\frac{s}{s+\omega_{s,\ell}}H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]}). (21)

Substituting (21) in (20), we get

μ\displaystyle\mu H(Z[1:s+ωs,]|W[1:N~])ss+ωs,H(Z[1:s+ωs,]|W[1:N~])\displaystyle\leq H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})-\frac{s}{s+\omega_{s,\ell}}H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})
=ωs,s+ωs,H(Z[1:s+ωs,]|W[1:N~]).\displaystyle=\frac{\omega_{s,\ell}}{s+\omega_{s,\ell}}H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]}).

Now, consider two cases a) if N~=min(N,(sr))=N\tilde{N}=\min(N,\ell\binom{s}{r})=N, then

H(Z[1:s+ωs,]|W[1:N~])=H(Z[1:s+ωs,]|W[1:N])=0\displaystyle H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})=H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:N]})=0 (23)

and b) if N~=min(N,(sr))=(sr)\tilde{N}=\min(N,\ell\binom{s}{r})=\ell\binom{s}{r}, then

H(\displaystyle H( Z[1:s+ωs,]|W[1:N~])=H(Z[1:s+ωs,]|W[1:(sr)])\displaystyle Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})=H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\ell\binom{s}{r}]}) (24a)
H(Z[1:s+ωs,],W[(sr)+1:N]|W[1:(sr)])\displaystyle\leq H(Z_{[1:s+\omega_{s,\ell}]},W_{[\ell\binom{s}{r}+1:N]}|W_{[1:\ell\binom{s}{r}]}) (24b)
=H(W[(sr)+1:N]|W[1:(sr)])+H(Z[1:s+ωs,]|W[1:N])\displaystyle=H(W_{[\ell\binom{s}{r}+1:N]}|W_{[1:\ell\binom{s}{r}]})+H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:N]}) (24c)
H(W[(sr)+1:N])N(sr)\displaystyle\leq H(W_{[\ell\binom{s}{r}+1:N]})\leq N-\ell\binom{s}{r} (24d)

where (23) and (24d) follow from the fact that the cache contents are functions of the files. Therefore, we have

H(Z[1:s+ωs,]|W[1:N~])(N(sr))+.\displaystyle H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})\leq\left(N-\ell\binom{s}{r}\right)^{+}. (25)

Therefore, we have the following upper bound on μ\mu:

μωs,s+ωs,(N(sr))+.\displaystyle\mu\leq\frac{\omega_{s,\ell}}{s+\omega_{s,\ell}}\left(N-\ell\binom{s}{r}\right)^{+}. (26)

Now, we find an upper bound on ψ\psi, where ψ=H(X[+1:δ]|Z[1:s+ωs,],X[1:],W[1:N~])\psi=H(X_{[\ell+1:\delta]}|Z_{[1:s+\omega_{s,\ell}]},X_{[1:\ell]},W_{[1:\tilde{N}]}).
We consider two cases a) if N(s+ωs,r)N\leq\ell\binom{s+\omega_{s,\ell}}{r}, then it is possible to decode all the files W[1:N]W_{[1:N]} from Z[1:s+ωs,]Z_{[1:s+\omega_{s,\ell}]} and X[1:]X_{[1:\ell]} by appropriately choosing the demand vectors 𝐝1,𝐝2,,𝐝δ\mathbf{d}_{1},\mathbf{d}_{2},\dots,\mathbf{d}_{\delta}. Then the uncertainty in X[+1:δ]X_{[\ell+1:\delta]} is zero. That is, when N(s+ωs,r)N\leq\ell\binom{s+\omega_{s,\ell}}{r}, we have ψ=0\psi=0.
b) The second case, $N>\ell\binom{s+\omega_{s,\ell}}{r}$, implies that $\omega_{s,\ell}=C-s$. Note that $\omega_{s,\ell}$ is defined such that, using the first $s+\omega_{s,\ell}$ caches and $\ell$ transmissions, it is possible to decode the remaining $N-\ell\binom{s}{r}$ files by appropriately choosing the demands of the remaining $\binom{s+\omega_{s,\ell}}{r}-\binom{s}{r}$ users, provided $N\leq\ell\binom{C}{r}$. That is, $N>\ell\binom{s+\omega_{s,\ell}}{r}$ implies that $N>\ell\binom{C}{r}$. Then, we have

ψ\displaystyle\psi =H(X[+1:δ]|Z[1:C],X[1:],W[1:N~])\displaystyle=H(X_{[\ell+1:\delta]}|Z_{[1:C]},X_{[1:\ell]},W_{[1:\tilde{N}]}) (27a)
=H(X[+1:δ]|Z[1:C],X[1:],W[1:(Cr)])\displaystyle=H(X_{[\ell+1:\delta]}|Z_{[1:C]},X_{[1:\ell]},W_{[1:\ell\binom{C}{r}]}) (27b)
H(X[+1:δ],W[(Cr)+1:N]|Z[1:C],X[1:],W[1:(Cr)])\displaystyle\leq H(X_{[\ell+1:\delta]},W_{[\ell\binom{C}{r}+1:N]}|Z_{[1:C]},X_{[1:\ell]},W_{[1:\ell\binom{C}{r}]}) (27c)
H(W[(Cr)+1:N]|Z[1:C],X[1:],W[1:(Cr)])+\displaystyle\leq H(W_{[\ell\binom{C}{r}+1:N]}|Z_{[1:C]},X_{[1:\ell]},W_{[1:\ell\binom{C}{r}]})+
H(X[+1:δ]|Z[1:C],X[1:],W[1:N])\displaystyle\qquad\qquad\qquad H(X_{[\ell+1:\delta]}|Z_{[1:C]},X_{[1:\ell]},W_{[1:N]}) (27d)
=H(W[(Cr)+1:N]|Z[1:C],X[1:],W[1:(Cr)])\displaystyle=H(W_{[\ell\binom{C}{r}+1:N]}|Z_{[1:C]},X_{[1:\ell]},W_{[1:\ell\binom{C}{r}]}) (27e)
H(W[(Cr)+1:N])=N(Cr).\displaystyle\leq H(W_{[\ell\binom{C}{r}+1:N]})=N-\ell\binom{C}{r}. (27f)

From the cache contents $Z_{[1:C]}$ and the transmissions $X_{[1:\ell]}$, it is possible to decode the files $W_{[1:\ell\binom{C}{r}]}$; hence (27b) follows. Further, (27e) follows from the fact that, given $W_{[1:N]}$, there is no uncertainty in the transmissions. Thus, we have the upper bound on $\psi$,

ψ(N(Cr))+.\displaystyle\psi\leq\left(N-\ell\binom{C}{r}\right)^{+}. (28)

Substituting (26) and (28) in (17b), we get

N\leq sM+\ell R^{*}(M)+\frac{\omega_{s,\ell}}{s+\omega_{s,\ell}}\left(N-\ell\binom{s}{r}\right)^{+}+\left(N-\ell\binom{C}{r}\right)^{+}.

Upon rearranging the terms and optimizing over all possible values of $s$ and $\ell$, we obtain the following lower bound on $R^{*}(M)$:

R(M)\displaystyle R^{*}(M) maxs{r,r+1,r+2,,C}{1,2,,N/(sr)}1{N\displaystyle\geq\max_{\begin{subarray}{c}s\in\{r,r+1,r+2,\ldots,C\}\\ \ell\in\left\{1,2,\ldots,\lceil N/\binom{s}{r}\rceil\right\}\end{subarray}}\frac{1}{\ell}\Big{\{}N-
ωs,s+ωs,(N(sr))+(N(Cr))+sM}\displaystyle\frac{\omega_{s,\ell}}{s+\omega_{s,\ell}}\left(N-\ell\binom{s}{r}\right)^{+}-\left(N-\ell\binom{C}{r}\right)^{+}-sM\Big{\}}

where $\omega_{s,\ell}=\min\left(C-s,\,\min\{i:\binom{s+i}{r}\geq\lceil\frac{N}{\ell}\rceil\}\right)$. This completes the proof of Theorem 3. $\blacksquare$
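The lower bound just derived is straightforward to evaluate; a small sketch follows (the toy parameters are illustrative). As sanity checks, the choices $s=r,\ \ell=N$ and $s=C,\ \ell=1$ recover the bounds $1-rM/N$ and $N-CM$ used in the later proofs:

```python
from math import ceil, comb

def omega(C, r, s, l, N):
    # smallest i with binom(s+i, r) >= ceil(N/l), capped at C - s
    need = ceil(N / l)
    for i in range(C - s + 1):
        if comb(s + i, r) >= need:
            return i
    return C - s

def lower_bound(C, r, N, M):
    # max over s in [r:C], l in [1 : ceil(N / binom(s,r))] of
    # (1/l)(N - w/(s+w) (N - l binom(s,r))^+ - (N - l binom(C,r))^+ - sM)
    best = 0.0
    for s in range(r, C + 1):
        for l in range(1, ceil(N / comb(s, r)) + 1):
            w = omega(C, r, s, l, N)
            val = (N
                   - (w / (s + w)) * max(N - l * comb(s, r), 0)
                   - max(N - l * comb(C, r), 0)
                   - s * M) / l
            best = max(best, val)
    return best

C, r, N = 5, 2, 10                                  # toy parameters
assert lower_bound(C, r, N, 0.5) >= N - C * 0.5     # s = C, l = 1 corner
assert lower_bound(C, r, N, 1.0) >= 1 - r * 1.0 / N # s = r, l = N corner
```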

V-D Proof of Theorem 4

First, we show that the rate R(M)=1rMNR(M)=1-\frac{rM}{N} is achievable for (Cr)1r(Cr)MN1r\frac{\binom{C}{r}-1}{r\binom{C}{r}}\leq\frac{M}{N}\leq\frac{1}{r}. Substituting t=Crt=C-r in (7) gives MN=(Cr)1r(Cr)\frac{M}{N}=\frac{\binom{C}{r}-1}{r\binom{C}{r}}, and R=1/(Cr)R=1/\binom{C}{r}. Similarly, t=Cr+1t=C-r+1 gives M/N=1/rM/N=1/r and R=0R=0. By memory sharing, the rate R(M)=1rMNR(M)=1-\frac{rM}{N} is achievable for (Cr)1r(Cr)MN1r\frac{\binom{C}{r}-1}{r\binom{C}{r}}\leq\frac{M}{N}\leq\frac{1}{r}. Now, to show the converse, let us substitute s=rs=r and =N\ell=N in (8). That gives,

R(M)1rMN.R^{*}(M)\geq 1-\frac{rM}{N}.

Therefore, we can conclude that

R(M)=1rMNR^{*}(M)=1-\frac{rM}{N}

for $\frac{\binom{C}{r}-1}{r\binom{C}{r}}\leq\frac{M}{N}\leq\frac{1}{r}$. This completes the proof of Theorem 4. $\blacksquare$

V-E Proof of Theorem 5

From Theorem 2, we know that for the (C,r,M,N)(C,r,M,N) combinatorial MACC scheme the rate

R(M)=NCMR(M)=N-CM

is achievable when $0\leq M\leq\frac{N-\binom{C-1}{r}}{C}$. By substituting $s=C$ and $\ell=1$ in (8), we get

R(M)NCM.R^{*}(M)\geq N-CM.

Thus, we conclude that

R(M)=NCMR^{*}(M)=N-CM

for $0\leq M\leq\frac{N-\binom{C-1}{r}}{C}$. This completes the proof of Theorem 5. $\blacksquare$

V-F Proof of Theorem 6

In this section, we show that the rate-memory trade-off of Scheme 1 is within a multiplicative gap of 21 from the lower bound on $R^{*}(M)$ in Theorem 3, when $r=2$ and $N\geq K$, where $K=\binom{C}{2}$ is the number of users. We consider the cases $C\leq 6$, $7\leq C\leq 11$, and $C\geq 12$ separately.
Case 1: First, assume that $C\leq 6$. Then, we have

R(M)=(C2)(t+22)(12MN).R(M)=\frac{\binom{C}{2}}{\binom{t+2}{2}}(1-\frac{2M}{N}).

By substituting s=2s=2 and =N\ell=N in (8), we get

R(M)12MN.R^{*}(M)\geq 1-\frac{2M}{N}.

Therefore, we obtain

R(M)R(M)(C2)(t+22)(C2)15.\frac{R(M)}{R^{*}(M)}\leq\frac{\binom{C}{2}}{\binom{t+2}{2}}\leq\binom{C}{2}\leq 15. (29)

Case 2: Now, we consider the case 7C117\leq C\leq 11. For MN/CM\geq N/C (t=1t=1 corresponds to M=N/CM=N/C), we have

R(M)\displaystyle R(M) =(C2)(t+22)(12MN)(112)(1+22)(12MN)\displaystyle=\frac{\binom{C}{2}}{\binom{t+2}{2}}(1-\frac{2M}{N})\leq\frac{\binom{11}{2}}{\binom{1+2}{2}}(1-\frac{2M}{N})
553(12MN).\displaystyle\leq\frac{55}{3}(1-\frac{2M}{N}).

Therefore, we have

R(M)R(M)55318.34\frac{R(M)}{R^{*}(M)}\leq\frac{55}{3}\leq 18.34 (30)

for MN/CM\geq N/C, where 7C117\leq C\leq 11.

Now, we consider the case MN/CM\leq N/C and 7C117\leq C\leq 11. By substituting =N(s2)\ell=\lceil\frac{N}{\binom{s}{2}}\rceil in (8), we get

R(M)NsMN(s2)NsMN(s2)+1(s2)(1sMN)1+(s2)N.R^{*}(M)\geq\frac{N-sM}{\lceil\frac{N}{\binom{s}{2}}\rceil}\geq\frac{N-sM}{\frac{N}{\binom{s}{2}}+1}\geq\frac{\binom{s}{2}(1-\frac{sM}{N})}{1+\frac{\binom{s}{2}}{N}}. (31)

Substituting s=3s=3 in (31) yields

R(M)3(13MN)1+3N.R^{*}(M)\geq\frac{3(1-\frac{3M}{N})}{1+\frac{3}{N}}.

Since C7C\geq 7, we have N(C2)21N\geq\binom{C}{2}\geq 21. Therefore, we get

R(M)218(13MN).R^{*}(M)\geq\frac{21}{8}(1-\frac{3M}{N}).

Therefore, we have

R(M)R(M)55×83×21(12MN)(13MN).\frac{R(M)}{R^{*}(M)}\leq\frac{55\times 8}{3\times 21}\frac{(1-\frac{2M}{N})}{(1-\frac{3M}{N})}.

However, we have 0MN1C170\leq\frac{M}{N}\leq\frac{1}{C}\leq\frac{1}{7}. Further, in that regime, the function (12MN)(13MN)\frac{(1-\frac{2M}{N})}{(1-\frac{3M}{N})} is monotonically increasing in MN\frac{M}{N}, and we have

(12MN)(13MN)1.25.\frac{(1-\frac{2M}{N})}{(1-\frac{3M}{N})}\leq 1.25.

Therefore, we get

R(M)R(M)55×8×1.253×218.74.\frac{R(M)}{R^{*}(M)}\leq\frac{55\times 8\times 1.25}{3\times 21}\leq 8.74. (32)

By combining (30) and (32), we get

R(M)R(M)18.34\frac{R(M)}{R^{*}(M)}\leq 18.34 (33)

when 7C117\leq C\leq 11.

Case 3: Finally, we consider the third case, where C12C\geq 12. Now, we let s=μCs=\lfloor\mu C\rfloor and =νN/(s2)\ell=\lceil\nu N/\binom{s}{2}\rceil, where μ[2/C,1]\mu\in[2/C,1] and ν[0,1]\nu\in[0,1]. Then, we have

R\displaystyle R^{*} (M)1{N(1sC)(N(s2))+\displaystyle(M)\geq\frac{1}{\ell}\bigg{\{}N-(1-\frac{s}{C})\left(N-\ell\binom{s}{2}\right)^{+}-
(N(C2))+sM}\displaystyle\qquad\qquad\qquad\qquad\qquad\left(N-\ell\binom{C}{2}\right)^{+}-sM\bigg{\}}
N(1μCC)(NνN/(s2)(s2))+νN/(μC2)\displaystyle\geq\frac{N-(1-\frac{\lfloor\mu C\rfloor}{C})\left(N-\lceil\nu N/\binom{s}{2}\rceil\binom{s}{2}\right)^{+}}{\lceil\nu N/\binom{\lfloor\mu C\rfloor}{2}\rceil}-
(NνN/(s2)(C2))+μCMνN/(μC2)\displaystyle\qquad\qquad\qquad\frac{\left(N-\lceil\nu N/\binom{s}{2}\rceil\binom{C}{2}\right)^{+}-\lfloor\mu C\rfloor M}{\lceil\nu N/\binom{\lfloor\mu C\rfloor}{2}\rceil}
N(1μC1C)(NνN)+νN/(μC2)+1\displaystyle\geq\frac{N-(1-\frac{\mu C-1}{C})\left(N-\nu N\right)^{+}}{\nu N/\binom{\lfloor\mu C\rfloor}{2}+1}-
(NνN/(s2)(C2))+μCMνN/(μC2)+1\displaystyle\qquad\qquad\qquad\frac{\left(N-\lceil\nu N/\binom{s}{2}\rceil\binom{C}{2}\right)^{+}-\mu CM}{\nu N/\binom{\lfloor\mu C\rfloor}{2}+1}
NN(1μ+1C)(1ν)νN/(μC2)+1\displaystyle\geq\frac{N-N(1-\mu+\frac{1}{C})(1-\nu)}{\nu N/\binom{\lfloor\mu C\rfloor}{2}+1}-
N(1ν(C2)/(s2))+μCMνN/(μC2)+1\displaystyle\qquad\qquad\qquad\frac{N\left(1-\nu\binom{C}{2}/\binom{s}{2}\right)^{+}-\mu CM}{\nu N/\binom{\lfloor\mu C\rfloor}{2}+1}
1(1μ+1C)(1ν)2νμCμC1+1N\displaystyle\geq\frac{1-(1-\mu+\frac{1}{C})(1-\nu)}{\frac{2\nu}{\lfloor\mu C\rfloor\lfloor\mu C-1\rfloor}+\frac{1}{N}}-
(1νC(C1)μCμC1)+μCMN2νμCμC1+1N\displaystyle\qquad\qquad\qquad\frac{\left(1-\nu\frac{C(C-1)}{\lfloor\mu C\rfloor\lfloor\mu C-1\rfloor}\right)^{+}-\mu C\frac{M}{N}}{\frac{2\nu}{\lfloor\mu C\rfloor\lfloor\mu C-1\rfloor}+\frac{1}{N}}
ν+μ(1ν)1νC(1νμ2(C1)C1/μ)+μCMN2ν(μC1)(μC2)+1N.\displaystyle\geq\frac{\nu+\mu(1-\nu)-\frac{1-\nu}{C}-\left(1-\frac{\nu}{\mu^{2}}\frac{(C-1)}{C-1/\mu}\right)^{+}-\mu C\frac{M}{N}}{\frac{2\nu}{(\mu C-1)(\mu C-2)}+\frac{1}{N}}.

We choose a value of ν[0,1]\nu\in[0,1] such that νμ2\nu\geq\mu^{2}. Then, we get (1νμ2(C1)C1/μ)+=0\left(1-\frac{\nu}{\mu^{2}}\frac{(C-1)}{C-1/\mu}\right)^{+}=0. In that case, we have

R(M)\displaystyle R^{*}(M) ν+μ(1ν)1νCμCMN2ν(μC1)(μC2)+1N\displaystyle\geq\frac{\nu+\mu(1-\nu)-\frac{1-\nu}{C}-\mu C\frac{M}{N}}{\frac{2\nu}{(\mu C-1)(\mu C-2)}+\frac{1}{N}}
(C1μ)(C2μ)2ν+μ(1ν)1νCμCMNνμ2+(C1μ)(C2μ)2N.\displaystyle\geq\frac{(C-\frac{1}{\mu})(C-\frac{2}{\mu})}{2}\frac{\nu+\mu(1-\nu)-\frac{1-\nu}{C}-\mu C\frac{M}{N}}{\frac{\nu}{\mu^{2}}+\frac{(C-\frac{1}{\mu})(C-\frac{2}{\mu})}{2N}}. (34)

Now, we assume that 0M1.1N/C0\leq M\leq 1.1N/C. We have the rate R(M)(C2)R(M)\leq\binom{C}{2}. Then, from (34), we obtain

R(M)R(M)\displaystyle\frac{R(M)}{R^{*}(M)} C(C1)(C1μ)(C2μ)νμ2+(C1μ)(C2μ)2Nν+μ(1ν)1νC1.1μ\displaystyle\leq\frac{C(C-1)}{(C-\frac{1}{\mu})(C-\frac{2}{\mu})}\frac{\frac{\nu}{\mu^{2}}+\frac{(C-\frac{1}{\mu})(C-\frac{2}{\mu})}{2N}}{\nu+\mu(1-\nu)-\frac{1-\nu}{C}-1.1\mu}
11C(11μC)(12μC)νμ2+(C1μ)(C2μ)2Nν+μ(1ν)1νC1.1μ.\displaystyle\leq\frac{1-\frac{1}{C}}{(1-\frac{1}{\mu C})(1-\frac{2}{\mu C})}\frac{\frac{\nu}{\mu^{2}}+\frac{(C-\frac{1}{\mu})(C-\frac{2}{\mu})}{2N}}{\nu+\mu(1-\nu)-\frac{1-\nu}{C}-1.1\mu}. (35)

Now, we substitute μ=0.6\mu=0.6 and ν=1\nu=1 in (35). Then, we obtain

R(M)R(M)\displaystyle\frac{R(M)}{R^{*}(M)} 1(1112×0.6)(1212×0.6)1.36+111.1×0.617.87\displaystyle\leq\frac{1}{(1-\frac{1}{12\times 0.6})(1-\frac{2}{12\times 0.6})}\frac{\frac{1}{.36}+1}{1-1.1\times 0.6}\leq 17.87 (36)

since N(C2)(C1/μ)(C2/μ)/2N\geq\binom{C}{2}\geq(C-1/\mu)(C-2/\mu)/2. Next, we consider the case 1.1N/CM1.9N/C1.1N/C\leq M\leq 1.9N/C. Then, we have

R(M)\displaystyle R(M) R(1.1NC)=0.9R(NC)+0.1R(2NC)\displaystyle\leq R(1.1\frac{N}{C})=0.9R(\frac{N}{C})+0.1R(\frac{2N}{C}) (37)
0.9(C3)C+0.1(C4)(C2)\displaystyle\leq 0.9\frac{\binom{C}{3}}{C}+0.1\frac{\binom{C}{4}}{\binom{C}{2}}
=0.9(C1)(C2)6+0.1(C2)(C3)12\displaystyle=0.9\frac{(C-1)(C-2)}{6}+0.1\frac{(C-2)(C-3)}{12}

where (37) follows from the memory-sharing technique. Using (34), we get

R(M)R(M)\displaystyle\frac{R(M)}{R^{*}(M)} 2R(M)(C1μ)(C2μ)νμ2+(C1μ)(C2μ)2Nν+μ(1ν)1νC1.9μ\displaystyle\leq\frac{2R(M)}{(C-\frac{1}{\mu})(C-\frac{2}{\mu})}\frac{\frac{\nu}{\mu^{2}}+\frac{(C-\frac{1}{\mu})(C-\frac{2}{\mu})}{2N}}{\nu+\mu(1-\nu)-\frac{1-\nu}{C}-1.9\mu}

where

2R(M)(C1μ)(C2μ)0.93(C1)(C2)(C1μ)(C2μ)+\displaystyle\frac{2R(M)}{(C-\frac{1}{\mu})(C-\frac{2}{\mu})}\leq\frac{0.9}{3}\frac{(C-1)(C-2)}{(C-\frac{1}{\mu})(C-\frac{2}{\mu})}+
0.16(C2)(C3)(C1μ)(C2μ)\displaystyle\qquad\qquad\qquad\frac{0.1}{6}\frac{(C-2)(C-3)}{(C-\frac{1}{\mu})(C-\frac{2}{\mu})}
0.93(11C)(12C)(11μC)(12μC)+0.16(12C)(13C)(11μC)(12μC).\displaystyle\qquad\leq\frac{0.9}{3}\frac{(1-\frac{1}{C})(1-\frac{2}{C})}{(1-\frac{1}{\mu C})(1-\frac{2}{\mu C})}+\frac{0.1}{6}\frac{(1-\frac{2}{C})(1-\frac{3}{C})}{(1-\frac{1}{\mu C})(1-\frac{2}{\mu C})}.

Since C12C\geq 12, we have

2R(M)(C1μ)(C2μ)\displaystyle\frac{2R(M)}{(C-\frac{1}{\mu})(C-\frac{2}{\mu})} 0.931(1112μ)(116μ)+\displaystyle\leq\frac{0.9}{3}\frac{1}{(1-\frac{1}{12\mu})(1-\frac{1}{6\mu})}+
0.161(1112μ)(116μ)\displaystyle\qquad\qquad\qquad\frac{0.1}{6}\frac{1}{(1-\frac{1}{12\mu})(1-\frac{1}{6\mu})}
=1.961(1112μ)(116μ).\displaystyle=\frac{1.9}{6}\frac{1}{(1-\frac{1}{12\mu})(1-\frac{1}{6\mu})}.

Now, let μ=0.37\mu=0.37 and ν=0.9969\nu=0.9969. Then, we have

R(M)R(M)\displaystyle\frac{R(M)}{R^{*}(M)} (1.96(1112×0.37)(116×0.37))×\displaystyle\leq\left(\frac{1.9}{6(1-\frac{1}{12\times 0.37})(1-\frac{1}{6\times 0.37})}\right)\times (38)
(0.99690.372+10.9969+0.37×0.00310.0031121.9×0.37)\displaystyle\left(\frac{\frac{0.9969}{0.37^{2}}+1}{0.9969+0.37\times 0.0031-\frac{0.0031}{12}-1.9\times 0.37}\right)
20.895.\displaystyle\leq 20.895. (39)
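The numerical constants in (36) and (39) follow from direct arithmetic with the stated $\mu$ and $\nu$; the following sketch checks both:

```python
# Constant in (36): mu = 0.6, nu = 1, C >= 12
C, mu, nu = 12, 0.6, 1.0
b36 = (1 / ((1 - 1 / (C * mu)) * (1 - 2 / (C * mu)))) \
      * ((nu / mu ** 2 + 1) / (1 - 1.1 * mu))
assert b36 <= 17.87

# Constant in (39): mu = 0.37, nu = 0.9969, C = 12
mu, nu = 0.37, 0.9969
f1 = 1.9 / (6 * (1 - 1 / (12 * mu)) * (1 - 1 / (6 * mu)))
f2 = (nu / mu ** 2 + 1) \
     / (nu + mu * (1 - nu) - (1 - nu) / 12 - 1.9 * mu)
assert f1 * f2 <= 20.8951  # the paper rounds this bound to 20.895
```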

Next, we consider the memory regime 1.9N/CM0.0712N1.9N/C\leq M\leq 0.0712N. We have

R(M)\displaystyle R(M) =(C2)(t+22)(12MN)C(C1)(t+2)(t+1)\displaystyle=\frac{\binom{C}{2}}{\binom{t+2}{2}}(1-\frac{2M}{N})\leq\frac{C(C-1)}{(t+2)(t+1)}
C(C1)(CMN+2)(CMN+1)(NM)2.\displaystyle\leq\frac{C(C-1)}{(\frac{CM}{N}+2)(\frac{CM}{N}+1)}\leq\left(\frac{N}{M}\right)^{2}. (40)

We now let s=0.6824N/Ms=\lfloor 0.6824N/M\rfloor and =N/(s2)\ell=\lceil N/\binom{s}{2}\rceil. Then, we have the lower bound from (8)

R(M)\displaystyle R^{*}(M) 1{N(1sC)(N(s2))+\displaystyle\geq\frac{1}{\ell}\bigg{\{}N-(1-\frac{s}{C})\left(N-\ell\binom{s}{2}\right)^{+}-
(N(C2))+sM}\displaystyle\qquad\qquad\qquad\qquad\qquad\left(N-\ell\binom{C}{2}\right)^{+}-sM\bigg{\}}
N(10.6824N/MC)(NN/(s2)(s2))+N/(0.6824N/M2)\displaystyle\geq\frac{N-(1-\frac{\lfloor 0.6824N/M\rfloor}{C})\left(N-\lceil N/\binom{s}{2}\rceil\binom{s}{2}\right)^{+}}{\lceil N/\binom{\lfloor 0.6824N/M\rfloor}{2}\rceil}-
(NN/(s2)(C2))+0.6824N/MMN/(0.6824N/M2)\displaystyle\qquad\qquad\frac{\left(N-\lceil N/\binom{s}{2}\rceil\binom{C}{2}\right)^{+}-\lfloor 0.6824N/M\rfloor M}{\lceil N/\binom{\lfloor 0.6824N/M\rfloor}{2}\rceil}
N0.6824NN(0.6824N/M2)+1\displaystyle\geq\frac{N-0.6824N}{\frac{N}{\binom{\lfloor 0.6824N/M\rfloor}{2}}+1}
10.68242(0.6824N/M1)(0.6824N/M2)+1N\displaystyle\geq\frac{1-0.6824}{\frac{2}{(0.6824N/M-1)(0.6824N/M-2)}+\frac{1}{N}}
=(NM)20.31762(0.6824M/N)(0.68242M/N)+(NM)2N.\displaystyle=\left(\frac{N}{M}\right)^{2}\frac{0.3176}{\frac{2}{(0.6824-M/N)(0.6824-2M/N)}+\frac{(\frac{N}{M})^{2}}{N}}.

Therefore, we obtain

R(M)R(M)\displaystyle\frac{R(M)}{R^{*}(M)} 2(0.6824M/N)(0.68242M/N)+(NM)2N0.3176.\displaystyle\leq\frac{\frac{2}{(0.6824-M/N)(0.6824-2M/N)}+\frac{(\frac{N}{M})^{2}}{N}}{0.3176}.

However, we have N/MC/1.9N/M\leq C/1.9 and M/N0.0712M/N\leq 0.0712. Thus

R(M)R(M)\displaystyle\frac{R(M)}{R^{*}(M)} 2(0.6824M/N)(0.68242M/N)+(NM)2N0.3176\displaystyle\leq\frac{\frac{2}{(0.6824-M/N)(0.6824-2M/N)}+\frac{(\frac{N}{M})^{2}}{N}}{0.3176}
2(0.6824.0712)(0.6824.1424)+C23.61N0.3176.\displaystyle\leq\frac{\frac{2}{(0.6824-.0712)(0.6824-.1424)}+\frac{C^{2}}{3.61N}}{0.3176}.

Since N(C2)N\geq\binom{C}{2}, we have

C23.61N\displaystyle\frac{C^{2}}{3.61N} C23.61(C2)=23.61(11C)\displaystyle\leq\frac{C^{2}}{3.61\binom{C}{2}}=\frac{2}{3.61(1-\frac{1}{C})}
23.61(1112)=0.6044.\displaystyle\leq\frac{2}{3.61(1-\frac{1}{12})}=0.6044.

Therefore, we obtain

R(M)R(M)\displaystyle\frac{R(M)}{R^{*}(M)} 2(0.6824.0712)(0.6824.1424)+0.60440.317620.983.\displaystyle\leq\frac{\frac{2}{(0.6824-.0712)(0.6824-.1424)}+0.6044}{0.3176}\leq 20.983. (41)

Now, consider the case M0.0712NM\geq 0.0712N. We have

R(M)\displaystyle R(M) =(C2)(t+22)(12MN)C(C1)(t+2)(t+1)(12MN)\displaystyle=\frac{\binom{C}{2}}{\binom{t+2}{2}}(1-\frac{2M}{N})\leq\frac{C(C-1)}{(t+2)(t+1)}\left(1-\frac{2M}{N}\right)
C(C1)(CMN+2)(CMN+1)(12MN)\displaystyle\leq\frac{C(C-1)}{(\frac{CM}{N}+2)(\frac{CM}{N}+1)}\left(1-\frac{2M}{N}\right)
(NM)2(12MN).\displaystyle\leq\left(\frac{N}{M}\right)^{2}\left(1-\frac{2M}{N}\right).

In (8), substituting =N/(s2)\ell=\lceil N/\binom{s}{2}\rceil gives

R(M)\displaystyle R^{*}(M) NsMN(s2)NsMN(s2)+1=1sMN1(s2)+1N.\displaystyle\geq\frac{N-sM}{\lceil\frac{N}{\binom{s}{2}}\rceil}\geq\frac{N-sM}{\frac{N}{\binom{s}{2}}+1}=\frac{1-\frac{sM}{N}}{\frac{1}{\binom{s}{2}}+\frac{1}{N}}.

Therefore, we get

R(M)R(M)\displaystyle\frac{R(M)}{R^{*}(M)} (1(s2)+1N)(NM)2(12MN)1sMN.\displaystyle\leq\left(\frac{1}{\binom{s}{2}}+\frac{1}{N}\right)\frac{(\frac{N}{M})^{2}(1-\frac{2M}{N})}{1-\frac{sM}{N}}.

Also, we have N(C2)(122)=66N\geq\binom{C}{2}\geq\binom{12}{2}=66. Then, we obtain

R(M)R(M)(1(s2)+166)(NM)2(12MN)1sMN.\frac{R(M)}{R^{*}(M)}\leq\left(\frac{1}{\binom{s}{2}}+\frac{1}{66}\right)\frac{(\frac{N}{M})^{2}(1-\frac{2M}{N})}{1-\frac{sM}{N}}. (42)

Next, consider the region $0.0712N\leq M\leq 0.1N$. By substituting $s=8$ in (42), we get

R(M)R(M)(128+166)(NM)2(12MN)18MN\frac{R(M)}{R^{*}(M)}\leq\left(\frac{1}{28}+\frac{1}{66}\right)\frac{(\frac{N}{M})^{2}(1-\frac{2M}{N})}{1-\frac{8M}{N}}

where (NM)2(12MN)18MN\frac{(\frac{N}{M})^{2}(1-\frac{2M}{N})}{1-\frac{8M}{N}} is a function of MN\frac{M}{N} and has a maximum value 400 in the region 0.0712M/N0.10.0712\leq M/N\leq 0.1. Therefore, we have

R(M)R(M)400(128+166)20.35.\frac{R(M)}{R^{*}(M)}\leq 400\left(\frac{1}{28}+\frac{1}{66}\right)\leq 20.35. (43)

Next, consider the region $0.1N\leq M\leq 0.16N$. By substituting $s=5$ in (42), we get

R(M)R(M)(110+166)(NM)2(12MN)15MN\frac{R(M)}{R^{*}(M)}\leq\left(\frac{1}{10}+\frac{1}{66}\right)\frac{(\frac{N}{M})^{2}(1-\frac{2M}{N})}{1-\frac{5M}{N}}

where the function (NM)2(12MN)15MN\frac{(\frac{N}{M})^{2}(1-\frac{2M}{N})}{1-\frac{5M}{N}} has a maximum value 160 in the region 0.1M/N0.160.1\leq M/N\leq 0.16. Therefore, we have

R(M)R(M)160(110+166)18.43.\frac{R(M)}{R^{*}(M)}\leq 160\left(\frac{1}{10}+\frac{1}{66}\right)\leq 18.43. (44)

Next, consider the region $0.16N\leq M\leq 0.25N$. By substituting $s=3$ in (42), we get

R(M)R(M)(13+166)(NM)2(12MN)13MN\frac{R(M)}{R^{*}(M)}\leq\left(\frac{1}{3}+\frac{1}{66}\right)\frac{(\frac{N}{M})^{2}(1-\frac{2M}{N})}{1-\frac{3M}{N}}

where the function (NM)2(12MN)13MN\frac{(\frac{N}{M})^{2}(1-\frac{2M}{N})}{1-\frac{3M}{N}} has a maximum value 51.082 in the region 0.16M/N0.250.16\leq M/N\leq 0.25. Therefore, we have

R(M)R(M)51.082(13+166)17.81.\frac{R(M)}{R^{*}(M)}\leq 51.082\left(\frac{1}{3}+\frac{1}{66}\right)\leq 17.81. (45)

Finally, consider the case $0.25N\leq M\leq 0.5N$. Substituting $s=2$ and $\ell=N$ in (8) yields

\[
R^{*}(M)\geq 1-\frac{2M}{N}.
\]

Also, we have

\[
R(M)\leq\left(\frac{N}{M}\right)^{2}\left(1-\frac{2M}{N}\right).
\]

Therefore, we get

\[
\frac{R(M)}{R^{*}(M)}\leq\left(\frac{N}{M}\right)^{2}\leq 16. \tag{46}
\]
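As a quick numerical sanity check on the three region maxima and the constants in (43)-(45), the following sketch evaluates the bounding functions by brute-force grid search (assuming sufficiently fine sampling; this is an illustration, not part of the proof):

```python
from math import comb

def g(x, s):
    # (N/M)^2 (1 - 2M/N) / (1 - s M/N) as a function of x = M/N
    return (1 / x**2) * (1 - 2 * x) / (1 - s * x)

def max_on(lo, hi, s, steps=100_000):
    # brute-force maximum of g(., s) on [lo, hi]
    return max(g(lo + (hi - lo) * k / steps, s) for k in range(steps + 1))

print(max_on(0.0712, 0.1, 8))    # ~400, attained at M/N = 0.1
print(max_on(0.1, 0.16, 5))      # ~160, attained at M/N = 0.1
print(max_on(0.16, 0.25, 3))     # ~51.082, attained at M/N = 0.16

print(400 * (1 / comb(8, 2) + 1 / 66))     # ~20.346 <= 20.35, as in (43)
print(160 * (1 / comb(5, 2) + 1 / 66))     # ~18.424 <= 18.43, as in (44)
print(51.082 * (1 / comb(3, 2) + 1 / 66))  # ~17.802 <= 17.81, as in (45)
```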

By combining (36), (39), (41), (43), (44), (45), and (46), we get

\[
\frac{R(M)}{R^{*}(M)}\leq 21 \tag{47}
\]

when $C\geq 12$.

Finally, combining (29), (33), and (47) yields the required gap result

\[
\frac{R(M)}{R^{*}(M)}\leq 21 \tag{48}
\]

for the $(C,r=2,M,N\geq\binom{C}{2})$ combinatorial MACC scheme. $\blacksquare$

VI Conclusion

In this work, we presented two coding schemes for the combinatorial MACC setting introduced in [30]. Both schemes employ coding in the placement phase in addition to coded transmissions in the delivery phase. We showed that, with the help of a coded placement phase, it is possible to achieve a reduced rate compared to the optimal coded caching scheme under uncoded placement. Finally, we derived an information-theoretic lower bound on the optimal rate-memory trade-off of the combinatorial MACC scheme, and showed that the first scheme is optimal in the higher memory regime and that the second scheme is optimal when the number of files at the server is no more than the number of users in the system.

Appendix A

In this section, we calculate the normalized cache memory as a continuation of (14). From (14), we have

\begin{align*}
\frac{M}{N}&=\frac{(\tilde{r}-1)!\binom{C-1}{t-1}}{\tilde{r}!\binom{C}{t}}+\frac{\sum\limits_{b=1}^{\tilde{r}-1}\frac{\tilde{r}!}{(\tilde{r}-b)(\tilde{r}-b+1)}\left(\binom{C-1}{t-1}-\sum\limits_{i=1}^{b}\binom{r-1}{\tilde{r}-i}\binom{C-r}{t-\tilde{r}+i-1}\right)}{\tilde{r}!\binom{C}{t}}\\
&=\frac{\binom{C-1}{t-1}\left((\tilde{r}-1)!+\sum\limits_{b=1}^{\tilde{r}-1}\frac{\tilde{r}!}{(\tilde{r}-b)(\tilde{r}-b+1)}\right)}{\tilde{r}!\binom{C}{t}}-\frac{\sum\limits_{b=1}^{\tilde{r}-1}\frac{\tilde{r}!}{(\tilde{r}-b)(\tilde{r}-b+1)}\sum\limits_{i=1}^{b}\binom{r-1}{\tilde{r}-i}\binom{C-r}{t-\tilde{r}+i-1}}{\tilde{r}!\binom{C}{t}}.
\end{align*}

Writing $\frac{1}{(\tilde{r}-b)(\tilde{r}-b+1)}=\frac{1}{\tilde{r}-b}-\frac{1}{\tilde{r}-b+1}$ and cancelling terms, the sum telescopes as

\begin{align*}
\sum_{b=1}^{\tilde{r}-1}\frac{\tilde{r}!}{(\tilde{r}-b)(\tilde{r}-b+1)}&=\tilde{r}!\sum_{b=1}^{\tilde{r}-1}\left(\frac{1}{\tilde{r}-b}-\frac{1}{\tilde{r}-b+1}\right)\\
&=\tilde{r}!\left(1-\frac{1}{\tilde{r}}\right)=(\tilde{r}-1)!(\tilde{r}-1).
\end{align*}

Thus, we have

\[
\frac{M}{N}=\frac{\tilde{r}!\binom{C-1}{t-1}-\sum\limits_{b=1}^{\tilde{r}-1}\frac{\tilde{r}!}{(\tilde{r}-b)(\tilde{r}-b+1)}\sum\limits_{i=1}^{b}\binom{r-1}{\tilde{r}-i}\binom{C-r}{t-\tilde{r}+i-1}}{\tilde{r}!\binom{C}{t}}.
\]

By changing the order of summation, we get

\[
\frac{M}{N}=\frac{\tilde{r}!\binom{C-1}{t-1}-\sum\limits_{i=1}^{\tilde{r}-1}\binom{r-1}{\tilde{r}-i}\binom{C-r}{t-\tilde{r}+i-1}\sum\limits_{b=i}^{\tilde{r}-1}\frac{\tilde{r}!}{(\tilde{r}-b)(\tilde{r}-b+1)}}{\tilde{r}!\binom{C}{t}}.
\]

The inner sum again telescopes:

\begin{align*}
\sum_{b=i}^{\tilde{r}-1}\frac{\tilde{r}!}{(\tilde{r}-b)(\tilde{r}-b+1)}&=\tilde{r}!\sum_{b=i}^{\tilde{r}-1}\left(\frac{1}{\tilde{r}-b}-\frac{1}{\tilde{r}-b+1}\right)\\
&=\tilde{r}!\left(1-\frac{1}{\tilde{r}-i+1}\right)=\tilde{r}!\,\frac{\tilde{r}-i}{\tilde{r}-i+1}.
\end{align*}

Therefore, we have the required expression in (6) as follows:

\begin{align*}
\frac{M}{N}&=\frac{\tilde{r}!\binom{C-1}{t-1}-\tilde{r}!\sum\limits_{i=1}^{\tilde{r}-1}\frac{\tilde{r}-i}{\tilde{r}-i+1}\binom{r-1}{\tilde{r}-i}\binom{C-r}{t-\tilde{r}+i-1}}{\tilde{r}!\binom{C}{t}}\\
&=\frac{t}{C}-\frac{1}{\binom{C}{t}}\sum\limits_{i=1}^{\tilde{r}-1}\frac{\tilde{r}-i}{r}\binom{r}{\tilde{r}-i+1}\binom{C-r}{t-\tilde{r}+i-1}.
\end{align*}

\blacksquare
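The telescoping evaluation used twice in this appendix can be verified exactly with a short script (a sketch; the hypothetical name `r_t` plays the role of $\tilde{r}$):

```python
from fractions import Fraction
from math import factorial

def telescoped(r_t, i):
    # sum_{b=i}^{r_t - 1} of r_t! / ((r_t - b)(r_t - b + 1)), computed exactly
    return sum(Fraction(factorial(r_t), (r_t - b) * (r_t - b + 1))
               for b in range(i, r_t))

for r_t in range(2, 10):
    for i in range(1, r_t):
        # closed form r_t! (r_t - i)/(r_t - i + 1); i = 1 recovers (r_t - 1)!(r_t - 1)
        assert telescoped(r_t, i) == Fraction(factorial(r_t) * (r_t - i), r_t - i + 1)
print("telescoping identity verified")
```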

Appendix B

In this section, we show that, for an $M$ corresponding to a $t\in[C-r]$ as defined in (6), the rate can be expressed as $R(M)=\binom{C}{r}\left(1-\frac{rM}{N}\right)/\binom{t+r}{r}$. From (6), we have

\[
\frac{M}{N}=\frac{t}{C}-\frac{1}{\binom{C}{t}}\sum\limits_{i=1}^{\tilde{r}-1}\frac{\tilde{r}-i}{r}\binom{r}{\tilde{r}-i+1}\binom{C-r}{t-\tilde{r}+i-1}.
\]

Therefore, we get

\[
\frac{rM}{N}\binom{C}{t}=r\binom{C-1}{t-1}-\sum\limits_{i=1}^{\tilde{r}-1}(\tilde{r}-i)\binom{r}{\tilde{r}-i+1}\binom{C-r}{t-\tilde{r}+i-1}.
\]

We expand the summation as

\begin{align*}
\sum\limits_{i=1}^{\tilde{r}-1}(\tilde{r}-i)\binom{r}{\tilde{r}-i+1}\binom{C-r}{t-\tilde{r}+i-1}&=\sum\limits_{i=1}^{\tilde{r}-1}(\tilde{r}-i+1)\binom{r}{\tilde{r}-i+1}\binom{C-r}{t-\tilde{r}+i-1}-\sum\limits_{i=1}^{\tilde{r}-1}\binom{r}{\tilde{r}-i+1}\binom{C-r}{t-\tilde{r}+i-1}\\
&=\sum\limits_{j=2}^{\tilde{r}}j\binom{r}{j}\binom{C-r}{t-j}-\sum\limits_{j=2}^{\tilde{r}}\binom{r}{j}\binom{C-r}{t-j}
\end{align*}

where $j=\tilde{r}-i+1$.

We can use the following identity to compute the above sum.

Lemma 2 ([38]).

Let $n_{1},n_{2}$ be arbitrary positive integers, and let $m$ be a positive integer such that $m\leq n_{1}+n_{2}$. Then

\[
\sum\limits_{k_{1}+k_{2}=m}k_{1}\binom{n_{1}}{k_{1}}\binom{n_{2}}{k_{2}}=\frac{mn_{1}}{n_{1}+n_{2}}\binom{n_{1}+n_{2}}{m}
\]

where the summation ranges over all non-negative integers $k_{1}$ and $k_{2}$ such that $k_{1}\leq n_{1}$, $k_{2}\leq n_{2}$, and $k_{1}+k_{2}=m$.

Lemma 2 is a generalization of the Vandermonde identity:

\[
\sum\limits_{k_{1}+k_{2}=m}\binom{n_{1}}{k_{1}}\binom{n_{2}}{k_{2}}=\binom{n_{1}+n_{2}}{m}. \tag{49}
\]
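Both Lemma 2 and the Vandermonde identity (49) are easy to confirm exhaustively over small parameters; a minimal sketch:

```python
from fractions import Fraction
from math import comb

def weighted_sum(n1, n2, m):
    # sum over k1 + k2 = m of k1 * C(n1, k1) * C(n2, k2)
    return sum(k1 * comb(n1, k1) * comb(n2, m - k1)
               for k1 in range(min(n1, m) + 1) if m - k1 <= n2)

for n1 in range(1, 8):
    for n2 in range(1, 8):
        for m in range(1, n1 + n2 + 1):
            # Lemma 2: equals m n1 / (n1 + n2) * C(n1 + n2, m)
            assert weighted_sum(n1, n2, m) == Fraction(m * n1 * comb(n1 + n2, m), n1 + n2)
            # Vandermonde identity (49)
            assert sum(comb(n1, k) * comb(n2, m - k)
                       for k in range(min(n1, m) + 1) if m - k <= n2) == comb(n1 + n2, m)
print("identities verified")
```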

From Lemma 2 with $n_{1}=r$, $n_{2}=C-r$, and $m=t$ (after removing the $j=0$ and $j=1$ terms; the terms with $j>\tilde{r}$ vanish since $\tilde{r}=\min(r,t)$), we have

\[
\sum\limits_{j=2}^{\tilde{r}}j\binom{r}{j}\binom{C-r}{t-j}=\frac{rt}{C}\binom{C}{t}-r\binom{C-r}{t-1}.
\]

Similarly, from (49), we get

\[
\sum\limits_{j=2}^{\tilde{r}}\binom{r}{j}\binom{C-r}{t-j}=\binom{C}{t}-r\binom{C-r}{t-1}-\binom{C-r}{t}.
\]

Therefore, we have

\begin{align*}
\frac{rM}{N}\binom{C}{t}&=r\binom{C-1}{t-1}-\bigg\{\frac{rt}{C}\binom{C}{t}-r\binom{C-r}{t-1}-\left(\binom{C}{t}-r\binom{C-r}{t-1}-\binom{C-r}{t}\right)\bigg\}\\
&=r\binom{C-1}{t-1}-\left(\frac{rt}{C}-1\right)\binom{C}{t}-\binom{C-r}{t}\\
&=\binom{C}{t}-\binom{C-r}{t}.
\end{align*}

Thus, we get

\[
\frac{M}{N}=\frac{1}{r}\left(1-\frac{\binom{C-r}{t}}{\binom{C}{t}}\right)=\frac{1}{r}\left(1-\frac{\binom{C}{t+r}\binom{t+r}{t}}{\binom{C}{r}\binom{C}{t}}\right). \tag{50}
\]

By rearranging (50), we get

\[
\frac{\binom{C}{t+r}}{\binom{C}{t}}=\frac{\binom{C}{r}\left(1-\frac{rM}{N}\right)}{\binom{t+r}{r}}.
\]

We know that $R(M)=\binom{C}{t+r}/\binom{C}{t}$. Therefore, we have the required rate expression

\[
R(M)=\frac{\binom{C}{r}\left(1-\frac{rM}{N}\right)}{\binom{t+r}{r}}.
\]

\blacksquare

Appendix C

In this section, we show that substituting $t=C-r+1$ in (6) gives $\frac{M}{N}=\frac{1}{r}$. From (6), we have

\[
\frac{M}{N}=\frac{t}{C}-\frac{1}{r\binom{C}{t}}\sum\limits_{i=1}^{\tilde{r}-1}(\tilde{r}-i)\binom{r}{\tilde{r}-i+1}\binom{C-r}{t-\tilde{r}+i-1}.
\]

a) Consider the case $\tilde{r}=r$. Then, using $\binom{C}{t}=\binom{C}{C-r+1}=\binom{C}{r-1}$, $\binom{r}{r-i+1}=\binom{r}{i-1}$, and $\binom{C-r}{t-r+i-1}=\binom{C-r}{C-2r+i}=\binom{C-r}{r-i}$, we have

\begin{align*}
\frac{M}{N}&=\frac{t}{C}-\frac{1}{r\binom{C}{r-1}}\sum\limits_{i=1}^{r-1}(r-i)\binom{r}{i-1}\binom{C-r}{r-i}\\
&=\frac{C-r+1}{C}-\frac{1}{r\binom{C}{r-1}}\sum\limits_{i=1}^{r-1}(r-i)\binom{r}{i-1}\binom{C-r}{r-i}.
\end{align*}

From Lemma 2, we have

\[
\sum\limits_{i=1}^{r-1}(r-i)\binom{r}{i-1}\binom{C-r}{r-i}=\frac{(r-1)(C-r)}{C}\binom{C}{r-1}.
\]

Thus, we get

\[
\frac{M}{N}=\frac{C-r+1}{C}-\frac{(r-1)(C-r)}{rC}=\frac{1}{r}.
\]

b) Now, consider the case $\tilde{r}=t$. Then, we have

\begin{align*}
\frac{M}{N}&=\frac{t}{C}-\frac{1}{r\binom{C}{t}}\sum\limits_{i=1}^{t-1}(t-i)\binom{r}{t-i+1}\binom{C-r}{i-1}\\
&=\frac{t}{C}-\frac{\frac{rt}{C}\binom{C}{t}-\binom{C}{t}}{r\binom{C}{t}}\tag{51}\\
&=\frac{1}{r}
\end{align*}

where (51) follows from Lemma 2 and (49), noting that $\binom{C-r}{t}=\binom{C-r}{C-r+1}=0$. Thus, we have established that $t=C-r+1$ corresponds to $M/N=1/r$. $\blacksquare$
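The conclusion of this appendix can also be confirmed numerically from (6) directly, covering both cases $\tilde{r}=r$ and $\tilde{r}=t$; a quick sketch (assuming $\tilde{r}=\min(r,t)$):

```python
from fractions import Fraction
from math import comb

def memory_ratio(C, r, t):
    # M/N from (6), with r~ = min(r, t)
    rt = min(r, t)
    s = sum(Fraction(rt - i, r) * comb(r, rt - i + 1) * comb(C - r, t - rt + i - 1)
            for i in range(1, rt))
    return Fraction(t, C) - s / comb(C, t)

# t = C - r + 1 always corresponds to M/N = 1/r
for C in range(3, 15):
    for r in range(2, C):
        assert memory_ratio(C, r, C - r + 1) == Fraction(1, r)
print("t = C - r + 1 gives M/N = 1/r")
```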

Appendix D

In this section, we consider the rate-memory trade-off of the $(C,r,M,N\geq\binom{C}{r})$ combinatorial MACC scheme given by Scheme 1. We consider a general $C$ and an $r\leq C/2$, and the memory regime $uN/C\leq M\leq vN/r$ for some positive real numbers $u\geq 1$ and $v<1$. Notice that, for $r=2$, we chose $u=1.9$ and $v=0.1424$ (Case 2 in the proof of Theorem 6). For a $t\in[C-r+1]$ and the corresponding $M$ from (6), we have

\begin{align*}
R(M)&=\frac{\binom{C}{r}(1-\frac{rM}{N})}{\binom{t+r}{r}}\\
&\leq\frac{C(C-1)\dots(C-r+1)}{(\frac{CM}{N}+r)(\frac{CM}{N}+r-1)\dots(\frac{CM}{N}+1)}\\
&\leq\left(\frac{N}{M}\right)^{r}.\tag{52}
\end{align*}
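Bound (52) can be checked exhaustively for small parameters against the exact $M/N$ from (6); a sketch (assuming $\tilde{r}=\min(r,t)$):

```python
from fractions import Fraction
from math import comb

def memory_ratio(C, r, t):
    # M/N from (6), with r~ = min(r, t)
    rt = min(r, t)
    s = sum(Fraction(rt - i, r) * comb(r, rt - i + 1) * comb(C - r, t - rt + i - 1)
            for i in range(1, rt))
    return Fraction(t, C) - s / comb(C, t)

for C in range(4, 14):
    for r in range(2, C // 2 + 1):
        for t in range(1, C - r + 2):
            MN = memory_ratio(C, r, t)                 # M/N > 0 for t >= 1
            rate = Fraction(comb(C, t + r), comb(C, t))
            assert rate <= (1 / MN) ** r               # (52): R(M) <= (N/M)^r
print("bound (52) verified")
```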

By substituting $\ell=\lceil\frac{N}{\binom{s}{r}}\rceil$ in (8), we get

\[
R^{*}(M)\geq\frac{N-sM}{1+\frac{N}{\binom{s}{r}}}. \tag{53}
\]

Substituting $s=\lfloor\alpha N/M\rfloor$ in (53), where $v<\alpha<1$ (which ensures that $s\in[r:C]$), yields

\begin{align*}
R^{*}(M)&\geq\frac{N-\lfloor\alpha N/M\rfloor M}{1+\frac{N}{\binom{\lfloor\alpha N/M\rfloor}{r}}}\geq\frac{N-\alpha N}{1+\frac{N}{\binom{\lfloor\alpha N/M\rfloor}{r}}}\\
&\geq\frac{1-\alpha}{\frac{1}{N}+\frac{1}{\binom{\lfloor\alpha N/M\rfloor}{r}}}\geq\frac{1-\alpha}{\frac{1}{N}+\frac{r!}{(\frac{\alpha N}{M}-1)(\frac{\alpha N}{M}-2)\dots(\frac{\alpha N}{M}-r)}}\\
&\geq\left(\frac{N}{M}\right)^{r}\frac{1-\alpha}{\frac{(\frac{N}{M})^{r}}{N}+\frac{r!}{(\alpha-\frac{M}{N})(\alpha-\frac{2M}{N})\dots(\alpha-\frac{rM}{N})}}\\
&\geq\left(\frac{N}{M}\right)^{r}\frac{1-\alpha}{\frac{(\frac{N}{M})^{r}}{\binom{C}{r}}+\frac{r!}{(\alpha-\frac{M}{N})(\alpha-\frac{2M}{N})\dots(\alpha-\frac{rM}{N})}}.\tag{54}
\end{align*}

From (52) and (54), we obtain

\begin{align*}
\frac{R(M)}{R^{*}(M)}&\leq\frac{\frac{(\frac{N}{M})^{r}}{\binom{C}{r}}+\frac{r!}{(\alpha-\frac{M}{N})(\alpha-\frac{2M}{N})\dots(\alpha-\frac{rM}{N})}}{1-\alpha}\\
&\leq\frac{\frac{(\frac{N}{M})^{r}}{\binom{C}{r}}+\frac{r!}{(\alpha-\frac{rM}{N})^{r}}}{1-\alpha}.\tag{55}
\end{align*}

Now, we separately bound the two terms in the numerator of (55). Since $M\geq uN/C$, we have

\[
\frac{(\frac{N}{M})^{r}}{\binom{C}{r}}\leq\frac{(C/u)^{r}}{\binom{C}{r}}=\frac{r!}{u^{r}(1-\frac{1}{C})(1-\frac{2}{C})\dots(1-\frac{r-1}{C})}.
\]

Since $r\leq C/2$, each factor $(1-\frac{i}{C})$ with $i\leq r-1$ is at least $\frac{1}{2}$, and we have

\[
\frac{(\frac{N}{M})^{r}}{\binom{C}{r}}\leq\frac{r!}{u^{r}(\frac{1}{2})^{r}}. \tag{56}
\]

The second term in (55) can be bounded as

\[
\frac{r!}{(\alpha-\frac{rM}{N})^{r}}\leq\frac{r!}{(\alpha-v)^{r}} \tag{57}
\]

since $M\leq vN/r$. By combining (56) and (57), we get

\[
\frac{R(M)}{R^{*}(M)}\leq\left(\frac{(\frac{2}{u})^{r}+\frac{1}{(\alpha-v)^{r}}}{1-\alpha}\right)r!. \tag{58}
\]

From (58), we can say that the gap grows approximately as

\[
\frac{R(M)}{R^{*}(M)}\leq cz^{r}r! \tag{59}
\]

where $c$ and $z>1$ are constants. Also, note that by choosing $\alpha$ appropriately, we can bring $z$ close to $1$ (though not arbitrarily close to $1$). Thus, in this memory regime, the optimality gap grows primarily through the (dominant) factor $r!$.

For $vN/r\leq M\leq N/r$, we have

\[
R(M)\leq\left(\frac{N}{M}\right)^{r}\left(1-\frac{rM}{N}\right). \tag{60}
\]

By substituting $s=r$ and $\ell=N$ in (8), we get

\[
R^{*}(M)\geq 1-\frac{rM}{N}. \tag{61}
\]

Therefore, by combining (60) and (61), we obtain

\[
\frac{R(M)}{R^{*}(M)}\leq\left(\frac{N}{M}\right)^{r}\leq\left(\frac{r}{v}\right)^{r} \tag{62}
\]

since $N/M\leq r/v$. In this memory regime, the gap grows with $r$ on the order of $(\frac{r}{v})^{r}$, where $v<1$. $\blacksquare$

References

  • [1] M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
  • [2] Q. Yu, M. A. Maddah-Ali and A. S. Avestimehr, “The exact rate-memory tradeoff for caching with uncoded prefetching,” in 2017 IEEE Int. Symp. Inf. Theory (ISIT), 2017, pp. 1613–1617.
  • [3] K. Wan, D. Tuninetti, and P. Piantanida, “An index coding approach to caching with uncoded cache placement,” IEEE Trans. Inf. Theory, vol. 66, no. 3, pp. 1318–1332, Mar. 2020.
  • [4] Z. Chen, P. Fan, and K. B. Letaief, “Fundamental limits of caching: Improved bounds for users with small buffers,” IET Commun., vol. 10, no. 17, pp. 2315–2318, Nov. 2016.
  • [5] M. M. Amiri and D. Gunduz, “Fundamental limits of coded caching: Improved delivery rate-cache capacity tradeoff,” IEEE Trans. Commun., vol. 65, no. 2, pp. 806–815, Feb. 2017.
  • [6] C. Tian and J. Chen, “Caching and delivery via interference elimination,” IEEE Trans. Inf. Theory, vol. 64, no. 3, pp. 1548–1560, Mar. 2018.
  • [7] J. Gómez-Vilardebó, “A novel centralized coded caching scheme with coded prefetching,” IEEE J. Sel. Areas Commun., vol. 36, no. 6, pp. 1165–1175, Jun. 2018.
  • [8] K. Zhang and C. Tian, “Fundamental limits of coded caching: From uncoded prefetching to coded prefetching,” IEEE J. Sel. Areas Commun., vol. 36, no. 6, pp. 1153–1164, Jun. 2018.
  • [9] J. Gómez-Vilardebó, “Fundamental limits of caching: Improved rate-memory trade-off with coded prefetching,” IEEE J. Sel. Areas Commun., vol. 66, no. 10, pp. 4488–4497, Oct. 2018.
  • [10] A. Sengupta, R. Tandon, and T. C. Clancy, “Improved approximation of storage-rate tradeoff for caching via new outer bounds,” in 2015 IEEE Int. Symp. Inf. Theory (ISIT), 2015, pp. 1691–1695.
  • [11] A. Sengupta and R. Tandon, “Improved approximation of storage-rate tradeoff for caching with multiple demands,” IEEE Trans. Commun., vol. 65, no. 5, pp. 1940–1955, May 2017.
  • [12] C.-Y. Wang, S. H. Lim, and M. Gastpar, “A new converse bound for coded caching,” in Inf. Theory and Appl. Workshop (ITA), Jan. 2016, pp. 1–6.
  • [13] C.-Y. Wang, S. S. Bidokhti, and M. Wigger, “Improved converses and gap-results for coded caching,” in IEEE Int. Symp. Inf. Theory (ISIT), Jun. 2017, pp. 2428–2432.
  • [14] H. Ghasemi and A. Ramamoorthy, “Improved lower bounds for coded caching,” IEEE Trans. Inf. Theory, vol. 63, no. 7, pp. 4388–4413, Jul. 2017.
  • [15] J. Hachem, N. Karamchandani, and S. N. Diggavi, “Coded caching for multi-level popularity and access,” IEEE Trans. Inf. Theory, vol. 63, no.5, pp. 3108–3141, May 2017.
  • [16] E. Parrinello, A. Ünsal and P. Elia, “Fundamental limits of coded caching with multiple antennas, shared caches and uncoded prefetching,” IEEE Trans. Inf. Theory, vol. 66, no. 4, pp. 2252–2268, Apr. 2020.
  • [17] K. S. Reddy and N. Karamchandani, “Rate-memory trade-off for multiaccess coded caching with uncoded placement,” IEEE Trans. Commun., vol. 68, no. 6, pp. 3261–3274, Jun. 2020.
  • [18] B. Serbetci, E. Parrinello, and P. Elia, “Multi-access coded caching: gains beyond cache-redundancy,” in IEEE Inf. Theory Workshop (ITW), 2019, pp. 1–5.
  • [19] K. S. Reddy and N. Karamchandani, “Structured index coding problems and multi-access coded caching,” in IEEE Inf. Theory Workshop (ITW), 2020, pp. 1–5.
  • [20] A. A. Mahesh and B. S. Rajan, “A coded caching scheme with linear sub-packetization and its application to multi-access coded caching,” in IEEE Inf. Theory Workshop (ITW), 2020, pp. 1–5.
  • [21] S. Sasi and B. S. Rajan, “Multi-access coded caching scheme with linear sub-packetization using PDAs,” in 2021 IEEE Int. Symp. Inf. Theory (ISIT), 2021, pp. 861–866.
  • [22] M. Cheng, K. Wan, D. Liang, M. Zhang, and G. Caire, “A novel transformation approach of shared-link coded caching schemes for multiaccess networks,” IEEE Trans. Commun., vol. 69, no. 11, pp. 7376–7389, Nov. 2021.
  • [23] K. K. K. Namboodiri and B. S. Rajan, “Multi-access coded caching with coded placement,” in IEEE Wireless Commun. Netw. Conf. (WCNC), 2022, pp. 2274–2279
  • [24] K. K. K. Namboodiri and B. S. Rajan, “Improved lower bounds for multi-access coded caching,” IEEE Trans. Commun., vol. 70, no. 7, pp. 4454–4468, Jul. 2022.
  • [25] K. K. K. Namboodiri and B. S. Rajan, “Multi-access coded caching with secure delivery,” in IEEE Inf. Theory Workshop (ITW), 2021, pp. 1–6.
  • [26] K. K. K. Namboodiri and B. S. Rajan, “Multi-access coded caching with demand privacy,” in IEEE Wireless Commun. Netw. Conf. (WCNC), 2022, pp. 2280–2285.
  • [27] D. Katyal, P. N. Muralidhar, and B. S. Rajan, “Multi-access coded caching schemes from cross resolvable designs,” IEEE Trans. Commun., vol. 69, no. 5, pp. 2997–3010, May 2021.
  • [28] P. N. Muralidhar, D. Katyal, and B. S. Rajan, “Improved multi-access coded caching schemes from cross resolvable designs,” in IEEE Inf. Theory Workshop (ITW), 2021, pp. 1–6.
  • [29] N. Das and B. S. Rajan, “Multi-access coded caching schemes from maximal cross resolvable designs,” in IEEE Int. Symp. Inf. Theory (ISIT), 2022, pp. 1719–1724.
  • [30] P. N. Muralidhar, D. Katyal, and B. S. Rajan, “Maddah-Ali-Niesen scheme for multi-access coded caching,” in IEEE Inf. Theory Workshop (ITW), 2021, pp. 1–6.
  • [31] F. Brunero and P. Elia, “Fundamental limits of combinatorial multi-access caching,” IEEE Trans. Inf. Theory, 2022 (Early Access).
  • [32] M. Chinnapadamala and B. S. Rajan, “Security and privacy in cache-aided linear function retrieval for multi-access coded caching,” in IEEE Inf. Theory Workshop (ITW), 2022.
  • [33] M. Ji, A. M. Tulino, J. Llorca, and G. Caire, “Caching in combination networks,” in 49th Asilomar Conf. Signals Syst. Comput., 2015, pp. 1269–1273.
  • [34] A. A. Zewail and A. Yener, “Coded caching for combination networks with cache-aided relays,” in IEEE Int. Symp. Inf. Theory (ISIT), 2017, pp. 2433–2437.
  • [35] N. Mital, D. Gündüz, and C. Ling, "Coded caching in a multi-server system with random topology," IEEE Trans. Commun., vol. 68, no. 8, pp. 4620–4631, Aug. 2020.
  • [36] F. J. MacWilliams and N. J. A. Sloane, “The theory of error-correcting codes”, Elsevier, 1977.
  • [37] T. S. Han, “Nonnegative entropy measures of multivariate symmetric correlations,” Inf. Control, 36(2):133–156, 1978.
  • [38] R. Meštrović, “Several generalizations and variations of Chu-Vandermonde identity,” available in arXiv:1807.10604[math.CO], Jul. 2018.
  • [39] K. K. K. Namboodiri and B. S. Rajan, “Combinatorial multi-access coded caching: improved rate-memory trade-off with coded placement,” in IEEE Inf. Theory Workshop (ITW), 2023.
K. K. Krishnan Namboodiri (Graduate Student Member, IEEE) was born in Kerala, India. He received the B.Tech. degree in electronics and communication engineering from the National Institute of Technology, Calicut, in 2019. He is currently pursuing the Ph.D. degree with the Department of Electrical Communication Engineering, Indian Institute of Science, Bengaluru. His primary research interests include coded caching, index coding, information theory and wireless communication. He is a recipient of the Prime Minister’s Research Fellowship (2021).
B. Sundar Rajan (Life Fellow, IEEE) was born in Tamil Nadu, India. He received the B.Sc. degree in mathematics from Madras University, Madras, India, in 1979, the B.Tech. degree in electronics from the Madras Institute of Technology, Madras, in 1982, and the M.Tech. and Ph.D. degrees in electrical engineering from IIT Kanpur, Kanpur, in 1984 and 1989, respectively. He was a Faculty Member at the Department of Electrical Engineering, IIT Delhi, New Delhi, from 1990 to 1997. He has been a Professor with the Department of Electrical Communication Engineering, Indian Institute of Science, Bengaluru, since 1998. His primary research interests include space-time coding for MIMO channels, distributed space-time coding and cooperative communication, coding for multiple-access and relay channels, and network coding. Dr. Rajan is a J. C. Bose National Fellow (2016–2025) and a member of the American Mathematical Society. He is a fellow of the Indian National Academy of Engineering, the Indian National Science Academy, the Indian Academy of Sciences, and the National Academy of Sciences, India. He was a recipient of Prof. Rustum Choksi Award by IISc for Excellence in Research in Engineering in 2009, the IETE Pune Center’s S. V. C. Aiya Award for Telecom Education in 2004, and the Best Academic Paper Award at IEEE WCNC in 2011. He served as the Technical Program Co-Chair for IEEE Information Theory Workshop (ITW’02) held in Bengaluru, in 2002. He was an Editor of IEEE Wireless Communications Letters (2012–2015) and IEEE Transactions on Wireless Communications (2007–2011), and an Associate Editor of Coding Theory for IEEE Transactions on Information Theory (2008–2011 and 2013–2015).