Combinatorial Multi-Access Coded Caching: Improved Rate-Memory Trade-off with
Coded Placement
Abstract
This work considers the combinatorial multi-access coded caching problem introduced in the recent work by Muralidhar et al. [P. N. Muralidhar, D. Katyal, and B. S. Rajan, “Maddah-Ali-Niesen scheme for multi-access coded caching,” in IEEE Inf. Theory Workshop (ITW), 2021]. The problem setting consists of a central server having a library of $N$ files and $C$ caches, each with capacity $M$ units. Each user in the system can access a unique set of $r$ caches, and there exists a user corresponding to every distinct set of $r$ caches. Therefore, the number of users in the system is $\binom{C}{r}$. For the aforementioned combinatorial multi-access setting, we propose a coded caching scheme with an MDS code-based coded placement. This novel placement technique helps to achieve a better rate in the delivery phase compared to the optimal scheme under uncoded placement in a certain memory regime. For a lower memory regime, we present another scheme with coded placement, which outperforms the optimal scheme under uncoded placement if the number of files is no more than the number of users. Further, we derive an information-theoretic lower bound on the optimal rate-memory trade-off of the combinatorial multi-access coded caching scheme. In addition, using the derived lower bound, we show that the first scheme is optimal in the higher memory regime, and the second scheme is optimal in the low memory regime if the number of users is no more than the number of files. Finally, we show that the performance of the first scheme is within a constant multiplicative factor of the optimal performance when $r = 2$.
Index Terms:
Coded caching, coded placement, combinatorial multi-access network, lower bound, rate-memory trade-off
I Introduction
With smartphones, mobile applications and expanded connectivity, people are consuming more video content than ever. This increase in mobile video consumption is taking a toll on the internet, which has seen data traffic increase many-fold in a few years. However, the high temporal variability of on-demand video services leaves the network underutilized during off-peak hours. Utilizing the channel resources by employing low-cost cache memory to store contents at the user end during off-peak times is an effective way to ease network traffic congestion in peak hours. In the conventional caching scheme, the users’ demands are met by filling the caches during the placement phase (in off-peak hours, without knowing the user demands) and by transmitting the remaining requested file contents in uncoded form during the delivery phase (in peak hours, after knowing the user demands). In coded caching, introduced in [1], it was shown that by employing coding in the delivery phase among the requested contents, it is possible to bring down the network load further compared to the conventional caching scheme. In [1], Maddah-Ali and Niesen considered a single server broadcast network, where the server is connected to $K$ users through an error-free shared link. The server has a library of $N$ files, and the user caches have a capacity of $M$ files, where $M \le N$ (since each user is equipped with a dedicated cache memory, we refer to this network as the dedicated cache network). During the placement phase, the server stores file contents (equivalent to $M$ files) in the caches without knowing the user demands. In the delivery phase, each user requests a single file from the server, and the server makes coded transmissions so that the users can decode their demanded files by making use of the cache contents and the coded transmissions.
The goal of the coded caching problem is to jointly design the placement and the delivery phases such that the rate (the size of the transmission) in the delivery phase is minimized. The scheme in [1] achieved a rate of $\frac{K(1-M/N)}{1+KM/N}$, which was later shown to be the optimal rate under uncoded placement if $N \ge K$ [2, 3]. In [4], it was shown that by employing coding among the contents stored in the placement phase, in addition to the coding in the delivery phase, a better rate-memory trade-off can be achieved. By using coded placement, further improvements in the rate were obtained by the schemes in [5, 6, 7, 8, 9]. The works in [10, 11, 12, 13, 14] presented different converse bounds for the optimal rate-memory trade-off of the coded caching scheme for the dedicated cache network.
Motivated by different practical scenarios, coded caching was extended to various network models, including multi-access networks [15] and shared cache networks [16]. In the multi-access coded caching (MACC) scheme proposed in [15], the number of users was kept the same as the number of caches. Further, each user was allowed to access a fixed number of neighbouring caches in a cyclic wrap-around fashion. The coded caching scheme under the cyclic-wrap model was studied in [17, 18, 19, 20, 21, 22]. Those works proposed different achievable schemes, all restricted to uncoded cache placement. In [23], the authors proposed a coded caching scheme with an MDS code-based coded placement technique and achieved a better rate-memory trade-off in the low memory regime compared to the schemes with uncoded placement. Further, by deriving an information-theoretic converse, the scheme in [23] was shown to be optimal in [24]. The MACC problem (in the cyclic wrap-around network) was studied by incorporating security and privacy constraints in [25, 26]. The first work that considered a multi-access network with more users than caches is [27]. The authors established a connection between the MACC problem and design theory, and obtained classes of MACC schemes from resolvable designs. The works in [28, 29] also relied on design theory to obtain MACC schemes.
In this paper, we consider the combinatorial multi-access network model introduced in [30], which consists of $C$ caches and $\binom{C}{r}$ users, where $r$ is the number of caches accessed by a user. The combinatorial MACC scheme presented in [30] achieves the rate $\binom{C}{t+r}/\binom{C}{t}$ at cache memory $M = Nt/C$, where $t \in [C]$. Here onwards, we refer to the scheme in [30] as the MKR scheme. The optimality of the MKR scheme under uncoded placement was shown in [31]. In addition, in [31], the system model in [30] was generalized to a setting where more than one user can access the same set of caches. In [32], the authors addressed the combinatorial MACC problem with privacy and security constraints.
Similar network topologies were considered in various cache-aided settings, including two-hop networks [33, 34] and multi-server networks [35]. The two-hop network considered in [33] and [34] consists of a server and users connected via a set of cache-aided relay nodes. Each user is connected to a distinct set of relay nodes, and thus the number of users equals the number of such sets. Those users also have dedicated caches. The scheme in [33] followed an uncoded random placement strategy, whereas the scheme in [34] relied on MDS-coded placement. The main difference between the combinatorial MACC scheme and the coded caching scheme for the two-hop network is in the delivery phase. In the combinatorial MACC setting, the server communicates directly with the users over an error-free broadcast link. However, in two-hop networks, the server responds to the users’ requests by transmitting signals to each of the relay nodes. Then, each relay node utilizes its received signal and its cached contents to transmit unicast signals to each of its connected end users. In [35], a network model with multiple servers and cache-aided users is considered. Each user can get connected to a random subset of the servers. In order to ensure that any such subset of servers collectively stores the entire file library, the file contents are stored in the servers after an MDS encoding.

I-A Contributions
In this paper, we study the combinatorial multi-access setting introduced in [30] and make the following technical contributions:
•
In [30], the authors proposed a combinatorial MACC scheme (the MKR scheme) that achieves the optimal rate under uncoded placement. However, in the MKR scheme, a user gets multiple copies of the same subfile from the different caches that it accesses. In order to remove that redundancy, we introduce a novel MDS code-based coded placement technique. By employing coded placement, we ensure that, using the accessible cache contents, a user can decode all the subfiles that the corresponding user in the MKR scheme (by corresponding user, we mean the user accessing the same set of caches) accesses in uncoded form. The coded subfiles transmitted in the delivery phase of our proposed scheme (Scheme 1) remain the same as the transmissions in the MKR scheme. Thereby, our proposed scheme in Theorem 1 achieves the same rate as the MKR scheme at a smaller cache memory value, and, in the corresponding memory regime, Scheme 1 outperforms the MKR scheme in terms of the rate achieved.
•
In the small memory regime, Scheme 1 has the same rate as the MKR scheme. Thus, we present another achievability result for the combinatorial MACC scheme in Theorem 2. The new coding scheme (Scheme 2) achieves, in the low memory regime, a rate which is strictly less than the rate achieved by the optimal scheme with uncoded placement when the number of files with the server is no more than the number of users in the system. In effect, by employing coding in the placement phase, we bring down the rate further (except at isolated corner points) compared to the optimal scheme under uncoded placement.
•
We derive an information-theoretic lower bound on the optimal rate-memory trade-off of the combinatorial MACC scheme (Theorem 3). To obtain this lower bound, we consider a scenario of decoding the entire server library at the user-end using multiple transmissions. In order to bound the uncertainty in those transmissions, we make use of an information-theoretic inequality (Han’s inequality) given in [37]. Han’s inequality was originally used in the context of coded caching in [10, 11]. Our bounding technique is similar to the approach used in [10] and [24], where lower bounds were derived for the dedicated cache scheme and the MACC scheme with cyclic wrap-around, respectively. While deriving the lower bound, we do not impose any restriction on the placement phase. Thus, the lower bound is applicable even if the scheme employs coding in the placement phase.
•
By using the derived lower bound, we show that Scheme 1 is optimal for higher values of $M$ (Theorem 4). Further, we show that Scheme 2 is optimal, in the low memory regime, when the number of users is no more than the number of files (Theorem 5). In addition, we show that when $r = 2$ and the number of files is at least the number of users, the rate-memory trade-off given by Scheme 1 is within a constant multiplicative factor, 21, of the information-theoretic optimal rate-memory trade-off (Theorem 6).
I-B Notations
For a positive integer $n$, $[n]$ denotes the set $\{1, 2, \dots, n\}$. For two positive integers $m, n$ such that $m \le n$, $[m:n] = \{m, m+1, \dots, n\}$. For two non-negative integers $m, n$, we have $\binom{n}{m} = \frac{n!}{m!(n-m)!}$ if $n \ge m$, and $\binom{n}{m} = 0$ if $n < m$. Uppercase letters (excluding a few letters reserved for denoting system parameters) are used to denote random variables. The set of random variables $\{Y_1, Y_2, \dots, Y_m\}$ is denoted as $Y_{[1:m]}$. Calligraphic letters are used to denote sets. Further, for a set of positive integers $\mathcal{A}$, $Y_{\mathcal{A}}$ represents the set of random variables indexed by the elements in $\mathcal{A}$ (for instance, $Y_{\{1,3\}} = \{Y_1, Y_3\}$). Bold lowercase letters represent vectors, and bold uppercase letters represent matrices. The $n \times n$ identity matrix is denoted as $\mathbf{I}_n$. Further, $\mathbb{F}_q$ represents the finite field of size $q$. Finally, for a real number $x$, $[x]^+ = \max(x, 0)$.
II System Model and Problem Formulation
The system model, as shown in Fig. 1, consists of a central server with a library of $N$ independent files, $W_1, W_2, \dots, W_N$, each of size 1 unit (we assume that 1 unit of a file is constituted by $F$ symbols from a finite field $\mathbb{F}_q$, where $q$ is such that all MDS codes considered in this work exist over $\mathbb{F}_q$). There are $C$ caches, each with capacity $M$ units ($0 \le M \le N$); i.e., each cache can store contents equivalent to $M$ files. There are $\binom{C}{r}$ users in the system, and each of them can access $r$ out of the $C$ caches, where $r < C$. We assume that there exists a user corresponding to any choice of $r$ caches from the total $C$ caches; thus the number of users in the system is $K = \binom{C}{r}$. Further, each user is denoted with a set $\mathcal{U}$, where $\mathcal{U} \subseteq [C]$ such that $|\mathcal{U}| = r$. That is, a user is indexed with the set of caches that it accesses; in other words, user $\mathcal{U}$ has access to all the caches in the set $\mathcal{U}$. A system under the aforementioned setting is called the combinatorial multi-access network. The coded caching scheme under this model was introduced in [30].
The combinatorial MACC scheme works in two phases, the placement phase and the delivery phase. In the placement phase, the server populates the caches with the file contents. The cache placement is done without knowing the demands of the users. The placement can be either coded or uncoded. By uncoded placement, we mean that files are split into subfiles and kept in the caches as such, while coded placement means that coded combinations of the subfiles are allowed to be kept in the caches. The number of subfiles into which a file is split is termed the subpacketization number. The contents stored in cache $c$, $c \in [C]$, are denoted as $Z_c$. In the delivery phase, user $\mathcal{U}$ requests file $W_{d_{\mathcal{U}}}$ from the server, where $d_{\mathcal{U}} \in [N]$, and the demand vector is $\mathbf{d}$. In the demand vector, we arrange the users (subsets of size $r$) in lexicographic order. Corresponding to the demand vector, the server makes a transmission $X$ of size $R$ units. We assume that the broadcast channel from the server to the users is error-free. The non-negative real number $R$ is said to be the rate of transmission. Finally, user $\mathcal{U}$ should be able to decode $W_{d_{\mathcal{U}}}$ using the transmission $X$ and the accessible cache contents $Z_c$, $c \in \mathcal{U}$. That is, for every $\mathcal{U} \subseteq [C]$ such that $|\mathcal{U}| = r$, we have
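As a concrete illustration of the indexing above, the users can be enumerated as the $r$-sized subsets of $[C]$. The following minimal sketch in Python uses toy parameters $C = 4$, $r = 2$ (assumed for illustration, not taken from the paper):

```python
from itertools import combinations
from math import comb

# Toy parameters (assumed for illustration): C = 4 caches, access degree r = 2.
C, r = 4, 2

# One user per r-sized subset of [C], so the number of users is K = C choose r.
users = list(combinations(range(1, C + 1), r))
assert len(users) == comb(C, r)  # K = 6 for C = 4, r = 2

for U in users:  # user U accesses exactly the caches in the set U
    print(f"user {set(U)} accesses caches {list(U)}")
```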
$H\big(W_{d_{\mathcal{U}}} \,\big|\, Z_{\mathcal{U}}, X\big) = 0, \qquad (1)$
where $Z_{\mathcal{U}} = \{Z_c : c \in \mathcal{U}\}$.
Definition 1.
For the combinatorial MACC scheme, a rate $R$ is said to be achievable if the scheme satisfies (1) with a rate less than or equal to $R$ for every possible demand vector. Further, the optimal rate-memory trade-off is
$R^*(M) = \inf\{R : R \text{ is achievable at cache memory } M\}. \qquad (2)$
The coded caching problem aims to design the placement and the delivery phases jointly so that the rate is minimized. In this work, we present two achievability results and a lower bound on $R^*(M)$.
III Preliminaries
In this section, we discuss the preliminaries required for describing the coding scheme as well as the converse bound.
III-A Review of the combinatorial MACC scheme in [30] (MKR scheme)
In the sequel, we describe the combinatorial MACC scheme presented in [30].
In the placement phase, the server divides each file into $\binom{C}{t}$ non-overlapping subfiles of equal size, where $t = CM/N \in [C]$. The subfiles are indexed with $t$-sized subsets of $[C]$. Therefore, we have $W_n = \{W_{n,\mathcal{T}} : \mathcal{T} \subseteq [C], |\mathcal{T}| = t\}$ for all $n \in [N]$. The server fills cache $c$ as follows:
$Z_c = \{W_{n,\mathcal{T}} : c \in \mathcal{T},\ \mathcal{T} \subseteq [C],\ |\mathcal{T}| = t,\ n \in [N]\} \qquad (3)$
for every $c \in [C]$. According to the above placement, the server stores $\binom{C-1}{t-1}$ subfiles of every file in a cache. Thus, we have $M = N\binom{C-1}{t-1}/\binom{C}{t} = Nt/C$. Now, assume that user $\mathcal{U}$ demands file $W_{d_{\mathcal{U}}}$ from the server. In the delivery phase, the server makes a coded transmission corresponding to every $\mathcal{S} \subseteq [C]$ such that $|\mathcal{S}| = t + r$. The server transmission is as follows:
$X_{\mathcal{S}} = \bigoplus_{\mathcal{U} \subseteq \mathcal{S} : |\mathcal{U}| = r} W_{d_{\mathcal{U}},\, \mathcal{S} \setminus \mathcal{U}}. \qquad (4)$
Therefore, the number of coded subfiles transmitted is $\binom{C}{t+r}$, where each coded subfile is $1/\binom{C}{t}$ of a file size. Thus the rate of transmission is $R = \binom{C}{t+r}/\binom{C}{t}$.
Notice that user $\mathcal{U}$ gets subfile $W_{d_{\mathcal{U}},\mathcal{T}}$ directly from the cache placement if $\mathcal{T} \cap \mathcal{U} \neq \emptyset$. Now, let us see how the user gets subfile $W_{d_{\mathcal{U}},\mathcal{T}}$ if $\mathcal{T} \cap \mathcal{U} = \emptyset$. Consider the transmission corresponding to the $(t+r)$-sized set $\mathcal{S} = \mathcal{T} \cup \mathcal{U}$. In the coded message $X_{\mathcal{S}}$, user $\mathcal{U}$ has access to every other term $W_{d_{\mathcal{U}'},\, \mathcal{S} \setminus \mathcal{U}'}$ with $\mathcal{U}' \neq \mathcal{U}$, since $(\mathcal{S} \setminus \mathcal{U}') \cap \mathcal{U} \neq \emptyset$. Therefore, user $\mathcal{U}$ can decode the demanded file $W_{d_{\mathcal{U}}}$.
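The placement and delivery bookkeeping described above can be checked symbolically for small parameters. The following sketch (toy values $C = 4$, $t = 2$, $r = 2$, assumed for illustration) verifies that every user obtains every subfile index, either directly from an accessed cache or from the transmission for $\mathcal{S} = \mathcal{T} \cup \mathcal{U}$, and evaluates the rate $\binom{C}{t+r}/\binom{C}{t}$:

```python
from itertools import combinations
from math import comb
from fractions import Fraction

# Toy parameters (assumed for illustration): C = 4 caches, t = 2, access degree r = 2.
C, t, r = 4, 2, 2
cache_ids = range(1, C + 1)

# Placement (3): cache c stores every subfile index T that contains c.
caches = {c: {T for T in combinations(cache_ids, t) if c in T} for c in cache_ids}

# Delivery (4): one coded transmission per (t+r)-sized subset S of [C].
transmissions = set(combinations(cache_ids, t + r))

# Check: every user U recovers every subfile index T.
for U in combinations(cache_ids, r):
    accessible = set().union(*(caches[c] for c in U))
    for T in combinations(cache_ids, t):
        if set(T) & set(U):
            assert T in accessible                  # read from an accessed cache
        else:
            S = tuple(sorted(set(T) | set(U)))
            assert S in transmissions               # decoded from transmission X_S

rate = Fraction(comb(C, t + r), comb(C, t))         # each coded subfile: 1/C(C,t) of a file
print(rate)  # 1/6 for C = 4, t = 2, r = 2
```

These toy values match the motivating example in Section III-D, where a single coded transmission of one-sixth of a file serves all six users.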
III-B Maximum distance separable (MDS) codes [36]
An $[n, k]$ maximum distance separable (MDS) code is an erasure code that allows recovery of the $k$ message/information symbols from any $k$ out of the $n$ coded symbols. Consider a systematic $[n, k]$ MDS code (over the finite field $\mathbb{F}_q$) with generator matrix $\mathbf{G} = [\mathbf{I}_k \mid \mathbf{P}]$, where every $k \times k$ submatrix of $\mathbf{G}$ is invertible. Then, we have
$[c_1, c_2, \dots, c_n] = \mathbf{u}\,\mathbf{G},$
where the message vector $\mathbf{u} = [u_1, u_2, \dots, u_k]$.
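To make the recovery property concrete, here is a toy systematic $[4, 2]$ MDS code over $\mathbb{F}_5$; the generator matrix is an illustrative choice (not one used in the paper) in which every $2 \times 2$ submatrix is invertible, so any 2 of the 4 coded symbols determine the 2 message symbols:

```python
from itertools import combinations

# Toy systematic [4,2] MDS code over GF(5); illustrative generator matrix.
q = 5
G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]  # G = [I_2 | P]; every 2x2 submatrix is invertible mod 5

def encode(u):
    """Message (u1, u2) -> codeword (c1, ..., c4) over GF(5)."""
    return [(u[0] * G[0][j] + u[1] * G[1][j]) % q for j in range(4)]

def decode(pos, vals):
    """Recover the message from any 2 coded symbols via Cramer's rule mod q."""
    a, b = G[0][pos[0]], G[0][pos[1]]
    c, d = G[1][pos[0]], G[1][pos[1]]
    det = (a * d - b * c) % q
    inv = pow(det, q - 2, q)          # modular inverse (q prime)
    u0 = ((vals[0] * d - vals[1] * c) * inv) % q
    u1 = ((vals[1] * a - vals[0] * b) * inv) % q
    return [u0, u1]

u = [3, 4]
cw = encode(u)
for pos in combinations(range(4), 2):  # every pattern of 2 surviving symbols
    assert decode(pos, [cw[i] for i in pos]) == u
print("message recovered from every 2 of the 4 coded symbols")
```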
III-C Han’s Inequality [37]
To derive a lower bound on $R^*(M)$, we use the following lemma, which gives an inequality on subsets of a set of random variables.
Lemma 1 (Han’s Inequality [37]).
Let $Y_{[1:m]} = \{Y_1, Y_2, \dots, Y_m\}$ be a set of $m$ random variables. Further, let $\mathcal{A}$ and $\mathcal{B}$ denote subsets of $[1:m]$ such that $|\mathcal{A}| = a$ and $|\mathcal{B}| = b$ with $a \le b \le m$. Then, we have
$\frac{1}{\binom{m}{a}} \sum_{\mathcal{A} \subseteq [1:m] : |\mathcal{A}| = a} \frac{H(Y_{\mathcal{A}})}{a} \;\ge\; \frac{1}{\binom{m}{b}} \sum_{\mathcal{B} \subseteq [1:m] : |\mathcal{B}| = b} \frac{H(Y_{\mathcal{B}})}{b} \qquad (5)$
where $Y_{\mathcal{A}}$ and $Y_{\mathcal{B}}$ denote the sets of random variables indexed by the elements in $\mathcal{A}$ and $\mathcal{B}$, respectively.
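A quick numeric sanity check of the inequality, in the normalized-entropy form stated above, on a toy joint distribution: three bits with $Y_3 = Y_1 \oplus Y_2$ (an assumed example, not from the paper):

```python
from itertools import combinations, product
from math import comb, log2

# Toy joint pmf over m = 3 bits with Y3 = Y1 XOR Y2, Y1 and Y2 fair and independent.
m = 3
pmf = {}
for y1, y2 in product((0, 1), repeat=2):
    pmf[(y1, y2, y1 ^ y2)] = 0.25

def H(S):
    """Joint entropy (in bits) of the variables indexed by tuple S."""
    marg = {}
    for y, p in pmf.items():
        key = tuple(y[i] for i in S)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

# Average of H(Y_S)/|S| over all subsets of each size k; Han's inequality
# says this sequence is non-increasing in k.
avg = [sum(H(S) / k for S in combinations(range(m), k)) / comb(m, k)
       for k in range(1, m + 1)]
assert all(avg[i] >= avg[i + 1] - 1e-12 for i in range(m - 1))
print(avg)  # [1.0, 1.0, 0.666...]
```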
III-D Motivating Example
Even though the MKR scheme is optimal under uncoded placement, we show with an example that a further reduction in the rate is possible with the help of coded placement.

Consider the combinatorial multi-access network in Fig. 2 with $C = 4$ caches and access degree $r = 2$. There are $\binom{4}{2} = 6$ users in the system, each denoted with a set $\mathcal{U} \subset [4]$ such that $|\mathcal{U}| = 2$. Let us first see the MKR scheme (with $t = 2$), where the placement is uncoded. In the placement phase, the server divides each file into 6 non-overlapping subfiles of equal size. Further, each subfile is denoted with a set $\mathcal{T} \subset [4]$, where $|\mathcal{T}| = 2$. Thus, we have $W_n = \{W_{n,\mathcal{T}} : \mathcal{T} \subset [4], |\mathcal{T}| = 2\}$ for all $n \in [N]$. Note that, even though the subfile indices are sets, we omit the curly braces for simplicity. The server populates cache $c$ with all the subfiles $W_{n,\mathcal{T}}$ such that $c \in \mathcal{T}$, for every $n \in [N]$.
The contents stored in the caches are given in Table I. Each cache stores 3 of the 6 subfiles of every file; thus $M = N/2$. From the cache placement, user $\mathcal{U}$ has access to all the subfiles (of every file) except the one indexed with the complementary 2-sized subset $[4] \setminus \mathcal{U}$. For example, user $\{1,2\}$ has access to all the subfiles except subfile $W_{n,34}$.
Cache 1 | Cache 2 | Cache 3 | Cache 4
In the delivery phase, the users reveal their demands. Let $W_{d_{\mathcal{U}}}$ be the file demanded by user $\mathcal{U}$. Then the server transmits the single coded message $X = W_{d_{\{1,2\}},34} \oplus W_{d_{\{1,3\}},24} \oplus W_{d_{\{1,4\}},23} \oplus W_{d_{\{2,3\}},14} \oplus W_{d_{\{2,4\}},13} \oplus W_{d_{\{3,4\}},12}$. The claim is that all the users will get their demanded file from the above placement and delivery phases. For example, user $\{1,2\}$ has access to all the subfiles except subfile $W_{d_{\{1,2\}},34}$; however, it can get $W_{d_{\{1,2\}},34}$ by subtracting the remaining subfiles from the transmitted message. Similarly, it can be verified that all the users will get their demanded files. Thus the rate achieved is 1/6. This achieved rate is, in fact, optimal under uncoded placement [31]. Notice that every subfile is cached in two caches. For instance, subfile $W_{n,12}$ is cached in cache 1 and cache 2. Thus user $\{1,2\}$, who accesses those two caches, gets multiple copies of the same subfile. This leads to the question of whether we can do better than the MKR scheme.
Now, we employ a coded placement technique and achieve the same rate 1/6 at a smaller cache memory $M = 5N/12$. The coding scheme is described hereinafter. The server divides each file into 6 subfiles of equal size, and each subfile is indexed with a set $\mathcal{T} \subset [4]$, where $|\mathcal{T}| = 2$. Thus, we have $W_n = \{W_{n,\mathcal{T}} : \mathcal{T} \subset [4], |\mathcal{T}| = 2\}$ for all $n \in [N]$. Further, each subfile is divided into 2 mini-subfiles, each having half of a subfile size; thus every file consists of 12 mini-subfiles. The cache placement is summarized in Table II. Each cache contains 5 mini-subfiles of every file (3 uncoded and 2 coded mini-subfiles), each of size 1/12 of a file size. Thus, the size of the caches is $M = 5N/12$.
Let $W_{d_{\mathcal{U}}}$ be the file demanded by user $\mathcal{U}$ in the delivery phase. Then the server transmits the same coded message $X$ as in the MKR scheme above. Thus the rate achieved is also the same as the MKR scheme, which is 1/6. Now, it remains to show that the users can decode their demanded files. Since our scheme follows the same delivery policy as the MKR scheme, it is enough to show that a user in our scheme can decode the subfiles which are accessible to the corresponding user (the user accessing the same set of caches) in the MKR scheme. Consider user $\{1,2\}$, accessing cache 1 and cache 2. From the uncoded mini-subfiles, the user obtains both halves of the subfile $W_{n,12}$ for all $n \in [N]$. Using these, the user can decode the coded mini-subfiles in cache 1, recovering the subfiles $W_{n,13}$ and $W_{n,14}$; similarly, from cache 2, the subfiles $W_{n,23}$ and $W_{n,24}$ can be decoded. That means user $\{1,2\}$ obtains the subfiles $W_{n,\mathcal{T}}$ for every $\mathcal{T}$ with $\mathcal{T} \cap \{1,2\} \neq \emptyset$ and every $n \in [N]$. From Table II, it can be verified that, in general, a user gets all the subfiles $W_{n,\mathcal{T}}$ such that $\mathcal{T} \cap \mathcal{U} \neq \emptyset$. Thus a user can decode the subfiles which are accessible, in the placement phase, to the corresponding user in the MKR scheme. In essence, by employing a coded placement technique, we could achieve the same rate at a smaller cache memory value.
Next, we show that, at cache memory $M = N/2$, it is possible to meet all the user demands without any server transmission. In other words, the rate $R = 0$ is achievable at $M = N/2$. In the placement phase, each file is divided into 2 subfiles of equal size, $W_n = \{W_{n,1}, W_{n,2}\}$ for all $n \in [N]$. Then $[W_{n,1}, W_{n,2}]$ is encoded with a $[4, 2]$ MDS code generator matrix $\mathbf{G}$. Thus, we have the coded subfiles
$[C_{n,1}, C_{n,2}, C_{n,3}, C_{n,4}] = [W_{n,1}, W_{n,2}]\,\mathbf{G}.$
Then the server fills the caches as follows: $Z_c = \{C_{n,c} : n \in [N]\}$ for every $c \in [4]$. From this placement, each user gets two coded subfiles of every file from the caches (one from each accessed cache). Since any two columns of the matrix $\mathbf{G}$ are linearly independent, the users can decode all the files without further transmissions.
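The zero-transmission claim can be checked directly: with each file split into 2 subfiles and encoded by a $[4, 2]$ MDS code, every pair of caches reveals the whole file. A sketch, assuming a particular generator matrix over $\mathbb{F}_5$ (the field and matrix are illustrative choices, not from the paper):

```python
from itertools import combinations

# Illustrative [4,2] MDS generator over GF(5): columns pairwise independent mod 5.
q = 5
G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]

subfiles = [2, 3]                    # the two subfiles of one file, as GF(5) symbols

def stored(idx):
    """Coded subfile C_{n,c} placed in cache idx (0-based) for this file."""
    return (subfiles[0] * G[0][idx] + subfiles[1] * G[1][idx]) % q

for U in combinations(range(4), 2):  # every user = a pair of caches
    a, b = G[0][U[0]], G[0][U[1]]
    c, d = G[1][U[0]], G[1][U[1]]
    det = (a * d - b * c) % q
    inv = pow(det, q - 2, q)         # modular inverse (q prime)
    v0, v1 = stored(U[0]), stored(U[1])
    u0 = ((v0 * d - v1 * c) * inv) % q
    u1 = ((v1 * a - v0 * b) * inv) % q
    assert [u0, u1] == subfiles      # file recovered with no server transmission
print("all 6 users decode the file from their caches alone")
```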
IV Main Results
In this section, we present two achievability results for the combinatorial MACC scheme. Further, we derive an information-theoretic lower bound on the optimal rate-memory trade-off $R^*(M)$.
Theorem 1.
Let $t \in [C]$. Then, for the combinatorial MACC scheme, the rate $R = \binom{C}{t+r}/\binom{C}{t}$ is achievable at cache memory
(6) |
where .
In Section V-A, for the combinatorial MACC problem, we explicitly present a coding scheme making use of coded placement. As we have seen in the example in Section III-D, coding in the placement phase helps to avoid the redundancy in the cached contents (accessed by a user) that is present in the MKR scheme. The example shows that, by storing fewer coded subfiles in the placement phase, a user can decode the same subfiles that the corresponding user in the MKR scheme (by corresponding user, we mean the user accessing the same set of caches) gets in uncoded form. In other words, it is possible to achieve the same rate as the MKR scheme with a smaller cache memory by employing coding in the placement. For a cache memory value as defined in (6) corresponding to a given $t$, we can rewrite the rate expression as (proof is given in Appendix B)
$R = \frac{\binom{C}{r}\,\binom{C-r}{t}\big/\binom{C}{t}}{\binom{t+r}{r}}. \qquad (7)$
Also, the extreme value of the parameter $t$ corresponds to a corner point of the trade-off (proof is provided in Appendix C). Therefore, the rate expression (7) is valid for every such $t$. For a general $M$, the lower convex envelope of these memory-rate points is achievable via the memory-sharing technique. In the rate expression, the term $\binom{C-r}{t}/\binom{C}{t}$ is the fraction of every file that a user still requires from the delivery phase; the remaining $1 - \binom{C-r}{t}/\binom{C}{t}$ fraction of every file can be accessed or decoded from the caches. Further, the denominator $\binom{t+r}{r}$ is the number of users simultaneously served by a coded transmission in the delivery phase.
The parameter $t$ in the MKR scheme represents the number of times the entire server library is duplicated among the caches. In the MKR scheme, user $\mathcal{U}$ gets $|\mathcal{T} \cap \mathcal{U}|$ copies of a subfile $W_{n,\mathcal{T}}$. Therefore, a user gets a different number of copies of different subfiles, depending upon the cardinality of the intersection between the user index set and the subfile index set. Our objective is to design a coded placement phase such that, from the accessible cache contents, a user is able to decode the subfiles that the corresponding user in the MKR scheme gets in uncoded form. The main challenge in designing such a placement phase is that a user should be able to decode all the subfiles with a non-zero intersection between the user index set and the subfile index set. To tackle this challenge, we employ a two-level MDS encoding. First, we encode the subfiles with an MDS code generator matrix. Then, in each round of the placement phase, a few selected coded subfiles are encoded again with another MDS code generator matrix. This two-level encoding technique is different from the MDS code-based coded placement strategies followed by the schemes in [34, 35] for two-hop networks and multi-server networks, respectively; in both cases, the entire file library was encoded with a single MDS code. In our case, user $\mathcal{U}$ should be able to decode all the subfiles $W_{n,\mathcal{T}}$ such that $\mathcal{T} \cap \mathcal{U} \neq \emptyset$ from the accessible cache contents. To ensure that, in general, a single-level encoding is not sufficient. Our first-level MDS encoding duplicates the file library, whereas our second-level MDS encoding ensures that the users can decode the required subfiles from the cache contents.
Further, note that $\min(t, r)$ is the maximum value of $|\mathcal{T} \cap \mathcal{U}|$ for any $\mathcal{T}$ and $\mathcal{U}$. In other words, $\min(t, r)$ is the maximum number of copies of a single subfile that is accessed by a user in the MKR scheme. Therefore, our placement phase depends upon $\min(t, r)$; in fact, the placement phase consists of multiple rounds: Round 0, Round 1, and so on. The transmissions in our delivery phase are the same as those of the MKR scheme. Thus the coded placement phase should enable user $\mathcal{U}$ to decode all the subfiles $W_{n,\mathcal{T}}$ with $\mathcal{T} \cap \mathcal{U} \neq \emptyset$ from the content stored in the accessible caches. Let us denote the content stored in cache $c$ in a given round as the round content of that cache. We design our placement phase such that user $\mathcal{U}$ can decode each subfile $W_{n,\mathcal{T}}$ using the round contents of the caches in $\mathcal{U}$, possibly together with subfiles decoded already. Thus the cache content decoding happens sequentially: first, the user decodes the subfiles with the largest intersection size $|\mathcal{T} \cap \mathcal{U}| = \min(t, r)$, then those with the next largest intersection size, and so on; finally, the user decodes the subfiles with $|\mathcal{T} \cap \mathcal{U}| = 1$.
When $t = 1$, a user accesses at most one copy of any subfile, so there is no duplication among the contents accessed by a user from different caches. Therefore, the performance improvement from coding is available only when $t > 1$. Further, the placement phase of the MKR scheme is independent of $r$, whereas our scheme takes $r$ into consideration during the cache placement. Also, our scheme needs a larger subpacketization than the $\binom{C}{t}$ required by the MKR scheme.
The MKR scheme achieves the rate $\binom{C}{t+r}/\binom{C}{t}$ at $M = Nt/C$, which is optimal under uncoded placement. By employing coding in the placement phase, we achieve the same rate at a strictly smaller cache memory whenever $t > 1$. If $t = 1$, the cache memory is $M = N/C$, and for $M \le N/C$, the scheme in Theorem 1 has the same rate-memory trade-off as the MKR scheme. For every $M > N/C$, our proposed scheme performs strictly better than the MKR scheme in terms of the rate achieved.
Remark 1.
To quantify the improvement brought in by Theorem 1 compared to the MKR scheme, we consider two memory points: a) a point in the lower memory regime corresponding to a small value of $t$, and b) a point in the higher memory regime corresponding to a large value of $t$.
a) The parameter corresponds to . The corresponding rate from Theorem 1 is . But, the MKR scheme achieves the rate . Therefore, at , we achieve a rate reduction by a multiplicative factor
For a fixed $C$, this multiplicative factor increases as the access degree $r$ increases.
b) The parameter corresponds to . We compare the rate in Theorem 1 with a benchmark scheme obtained by combining the rate-memory trade-off of the MKR scheme and the memory-rate pair (for the cyclic wrap-around MACC scheme, the achievability of is shown in [15]). Note that, for the combinatorial MACC scheme, the memory-rate pair is achievable only with coded placement. For the ease of comparison, we assume that is an integer. Then, we have the rate achieved by the scheme in Theorem 1, at . At the same , the benchmark scheme achieves a rate . Therefore, we have the following multiplicative reduction factor at ,
where since . That is, in general at the considered normalized cache memory value.
Next, we present a combinatorial MACC scheme that performs better than the MKR scheme in the lower memory regime. In the following theorem, we present an achievability result for that regime.
Theorem 2.
For the combinatorial MACC scheme with , the rate is achievable for .
When the number of files is no more than the number of users, this rate is strictly less than the rate of the MKR scheme in the corresponding low memory regime.
Remark 2.
The scheme in Theorem 2 achieves the rate at , if . At the same time the optimal rate under uncoded placement is . Therefore, when , we have
at . That is, the rate reduction obtained from coded placement diminishes as the number of caches in the system increases.
Now, we present a lower bound on the optimal rate-memory trade-off for the combinatorial MACC scheme.
Theorem 3.
For the combinatorial MACC scheme
(8) |
where .
The lower bound in Theorem 3 has two parameters: one related to the number of caches considered, and one associated with the number of transmissions considered. To obtain this lower bound, we restrict attention to a subset of the caches and to the users who access caches only from that subset. From a sufficient number of transmissions, all the files can be decoded at the users’ end by appropriately choosing the demand vectors. Further, we split the total transmissions into two groups and bound the uncertainty in the two groups separately. This bounding technique is similar to the approach used in [10] and [24], where lower bounds were derived for the dedicated cache scheme and the MACC scheme with cyclic wrap-around, respectively.
In Fig. 3, we plot the rate-memory trade-off in Theorem 1 along with that of the MKR scheme. For $M \le N/C$ (i.e., $t = 1$), both schemes have the same rate-memory trade-off. However, for $M > N/C$, our scheme has a strictly lower rate compared to the MKR scheme. It is worth noting that, in order to achieve $R = 0$, our scheme requires a strictly smaller cache memory than the optimal scheme with uncoded placement. In Fig. 4, we compare the performance of the combinatorial MACC scheme under uncoded and coded placements: the rate-memory trade-off of the optimal scheme under uncoded placement (the MKR scheme) is plotted alongside the trade-off obtained from Theorem 1 and Theorem 2, where the placement is coded. For the combinatorial MACC scheme, the improvement in rate using coded placement can also be observed from Fig. 5. Also, notice that the rate-memory trade-offs in Theorem 1 and Theorem 2 are optimal in the memory regimes identified in Theorem 4 and Theorem 5, respectively.



Using the lower bound in Theorem 3, we show the following optimality results. First, we prove that the rate-memory trade-off in Theorem 1 is optimal at a higher memory regime.
Theorem 4.
For the combinatorial MACC scheme
(9) |
for .
The optimality is shown towards the higher memory regime, where each user requires only a single subfile from the delivery phase, and the server can meet all the demands with a single coded transmission. Also, it is worth noting that in our proposed scheme (Scheme 1), the delivery phase is optimal under the placement that we followed. This is because the users in our proposed scheme and the users in the MKR scheme obtain the same subfiles from the respective placement phases; thus, if we could design a better delivery phase, the same would apply to the MKR scheme. However, that is impossible, since the MKR delivery is optimal for the placement employed. As with the dedicated cache scheme (introduced in [1]), fully characterizing the optimal rate-memory trade-off of the combinatorial MACC scheme remains open.
In the lower memory regime, the rate-memory trade-off in Theorem 1 is clearly not optimal, since the scheme in Theorem 2 achieves a better rate there. Now, we show that the rate-memory trade-off in Theorem 2 is optimal if the number of users is not more than the number of files.
Theorem 5.
For the combinatorial MACC scheme with , we have
(10) |
for .
In the proposed scheme, coded subfiles are kept in each cache in the placement phase; however, the delivery phase is uncoded. That is, the server transmits uncoded subfiles in the delivery phase. A user receives some of the subfiles of the demanded file directly, and the remaining subfiles need to be decoded from the accessible caches. To decode a subfile of the demanded file from an accessible cache, a user requires additional subfiles from the delivery phase. That is, using the coded subfiles in a cache together with uncoded subfiles from the delivery phase, a user can decode several subfiles, out of which only one is required by that user. Therefore, the ratio between the number of coded subfiles stored in a cache and the number of subfiles required to decode a single subfile from a cache grows with the system size. Thus the optimality of the scheme is limited to the case where the number of users is no more than the number of files.
Next, we show that the rate-memory trade-off of Scheme 1 is within a constant multiplicative factor of the optimal rate-memory trade-off when $r = 2$ and the number of files with the server is not less than the number of users in the system.
Theorem 6.
For the combinatorial MACC scheme with $r = 2$ and $N \ge \binom{C}{2}$, we have
(11) |
where the achievable rate is as defined in (7).
In order to prove the optimality gap result, we divide the entire memory regime into three regions and approximate the achievable rate in each region. To obtain the optimality gap, we compute the ratio of these approximate rate-memory trade-offs to the lower bound on $R^*(M)$ in Theorem 3. Due to these approximations and the analytical bounding techniques used, the optimality gap of 21 in Theorem 6 is loose; numerical simulations for the combinatorial MACC scheme indicate a smaller multiplicative gap.
Further, numerical simulations suggest that the multiplicative gap between the rate-memory trade-off given by Scheme 1 and the lower bound on given in Theorem 3 increases as increases. However, providing an analytical result on the optimality gap becomes increasingly hard with increasing because of the optimization problem in the lower bound expression (8).
Remark 3.
To prove the optimality gap result for the combinatorial MACC scheme with , we divide the entire parameter regime into three cases and further into several sub-cases. For a general , the optimization problem in the lower bound (8) and the presence of two variables in the rate expression, namely and (other than and ), make an analytical derivation of the gap result increasingly hard. Also, numerical simulations suggest that the multiplicative factor (gap) between given by Scheme 1 and the lower bound on given in Theorem 3 increases as increases. Additionally, for the combinatorial MACC scheme with , for some and , we have
where are constants (proof is given in Appendix D). Similarly, for , we have shown in Appendix D that
Numerical simulations suggest that the multiplicative gap of 11 for increases to 15 and 55 as increases to 3 and 5, respectively. This indicates that the order provided by the analysis in Appendix D is loose, since for a general we are unable to optimize the lower bound over the two variables and . However, the gap results indicate the possible existence of either an achievable scheme with an additional gain that scales with , or a better lower bound that exceeds the lower bound in Theorem 3 approximately by a factor . Moreover, if such a better scheme exists, it would imply that a coded caching scheme with coded placement can outperform the optimal scheme with uncoded placement by a multiplicative factor that is not a constant and scales with .
V Proofs
V-A Proof of Theorem 1
In this section, we present a coding scheme (we refer to this scheme as Scheme 1) that achieves the rate at cache memory (corresponding to every ) as given in (6). First, let us consider the case (for , we present a separate coding scheme at the end of this proof).
a) Placement phase: The server divides each file into non-overlapping subfiles of equal size. Each subfile is indexed with a -sized set . We assume to be an ordered set, i.e., with . Also, whenever an ordering among the subfiles (indexed with -sized subsets of ) is required, lexicographic ordering is followed. We have the file for every . Let us define . Each subfile is further divided into mini-subfiles. The subfile is divided as follows:
Let be a generator matrix of an MDS code. For every and for every such that , the server encodes the mini-subfiles with and obtains the coded mini-subfiles
(12)
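As a concrete illustration of the encoding in (12), the sketch below treats mini-subfiles as symbols over a prime field and encodes them with a Vandermonde generator matrix, a standard MDS construction in which any k columns are linearly independent. The field size, matrix dimensions, and all names are illustrative assumptions, not the paper's exact parameters.

```python
# Illustrative MDS encoding of mini-subfiles over GF(P); all parameters
# are assumptions for the sake of the sketch.
P = 257  # prime field size, assumed large enough for the symbol alphabet

def vandermonde_generator(k, n):
    """k x n generator of an [n, k] MDS code over GF(P): column j is
    (1, a_j, a_j^2, ...) with distinct evaluation points a_j = j + 1."""
    return [[pow(j + 1, row, P) for j in range(n)] for row in range(k)]

def mds_encode(symbols, G):
    """Encode k field symbols into n coded symbols: c = s * G over GF(P)."""
    k, n = len(G), len(G[0])
    return [sum(symbols[i] * G[i][j] for i in range(k)) % P for j in range(n)]

G = vandermonde_generator(3, 5)
coded = mds_encode([10, 20, 30], G)  # 5 coded mini-subfiles from 3 uncoded ones
```

Because the evaluation points are distinct, every 3x3 submatrix of `G` is an invertible Vandermonde matrix, which is exactly the MDS property the decoding argument later relies on.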
Now, for an integer and a set such that , we define a function . The function gives the position of in the ordered set of size . If , then .
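A minimal sketch of this position function and of the lexicographic ordering convention used for subfile indices, assuming 1-based positions; the helper name `pos` is illustrative, and Python's `itertools.combinations` happens to emit subsets in exactly the lexicographic order assumed above.

```python
from itertools import combinations

def pos(i, T):
    """1-based position of element i in the ordered (sorted) set T."""
    return sorted(T).index(i) + 1

# combinations() lists r-sized subsets in lexicographic order, matching the
# subfile-indexing convention described in the placement phase.
subsets = list(combinations(range(1, 5), 2))
# subsets == [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
# e.g. in T = {2, 5, 9}: pos(2, T) == 1, pos(5, T) == 2, pos(9, T) == 3
```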
The placement phase consists of rounds: Round 0, Round 1, …, Round . The content stored in cache in Round is denoted as , where . In Round 0, the server fills cache as
where . Notice that coded mini-subfiles (of all the files) are placed in all the caches in Round 0.
Now let us focus on the cache placement in Round , where . In this round, the server further encodes certain coded mini-subfiles using an MDS code generator matrix. Let be a systematic generator matrix of a MDS code, and let . In Round , the server picks the coded mini-subfiles for every set such that , where and . Those subfiles are encoded with as follows:
(13)
We refer to the newly obtained coded mini-subfiles as doubly-encoded mini-subfiles. In Round , the server places the following doubly-encoded mini-subfiles in cache ,
Note that, in Round , a total of doubly-encoded mini-subfiles of all the files are kept in each cache.
The overall content stored in cache during the placement phase is
Each coded mini-subfile has of a file-size. Therefore, the normalized cache memory is
(14)
The calculation of the value is elaborated in Appendix A.
b) Delivery phase: Let be the file demanded by user , where and . During the delivery phase, the server makes a transmission corresponding to every -sized subset of . The transmission corresponding to a set such that is
Note that, for a given , this delivery phase is the same as the delivery phase in the MKR scheme. There are -sized subsets of . The server makes a transmission corresponding to each of those subsets. Therefore, the number of coded subfiles transmitted in the delivery phase is . Each subfile has of a file-size. Therefore, the rate of transmission is .
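Since the delivery phase coincides with the MKR scheme, the load can be counted directly: one coded subfile is sent per (t+r)-sized subset of the Γ caches, normalized by the subpacketization level. The sketch below uses the symbols Γ, r, t in the MKR convention; the numbers are purely illustrative.

```python
from math import comb

def mkr_rate(Gamma, r, t):
    """Delivery rate: number of transmissions (one per (t+r)-sized subset of
    the Gamma caches) divided by the subpacketization level C(Gamma, t)."""
    return comb(Gamma, t + r) / comb(Gamma, t)

# e.g. Gamma = 6 caches, access degree r = 2, cache parameter t = 2:
rate = mkr_rate(6, 2, 2)  # C(6, 4) / C(6, 2) = 15 / 15 = 1.0
```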
Next, we show that by employing the above placement and delivery phases, all the users can decode their demanded files. Note that all the MDS code generator matrices used for encoding the subfiles are known to the users as well. From the coded subfiles placed in the caches, if a user can decode all the subfiles that are accessible to the corresponding user in the MKR scheme, then the decodability of the demanded files is guaranteed, since the delivery phase of our proposed scheme is the same as that of the MKR scheme. Thus, we show that user can decode the subfiles such that from the coded subfiles stored in the accessible caches, which ensures the decodability of the demanded files. First, we show that an arbitrary user can decode if . Then we show that if the user decodes all the subfiles such that , then the user can decode the subfiles with , where . By applying this argument sequentially from to , we have the required result that the user can decode all the subfiles with .
c) Decodability: Consider user accessing cache for every . Let us define . Further, consider a -sized set such that . The user has access to the following coded mini-subfiles of the subfile from the caches:
Note that the number of coded mini-subfiles of accessible for the user is
Thus user can decode mini-subfiles of from the coded mini-subfiles (from (12)), and completely retrieve the subfile . That is, user can decode every subfile with , using , alone. Also, note that the user gets all the coded mini-subfiles from (from (12)), since is known to all the users.
Now, we assume that user has decoded all the subfiles with , where . Then, we show that the user can decode the subfiles with . From , user has the doubly-encoded mini-subfiles for every . Since the user has decoded all the subfiles with , the user knows all the coded mini-subfiles in . Note that, for a given , we have
Therefore, using and , the user can decode the coded mini-subfiles (from (13)). Now, user has the following coded mini-subfiles of the subfile :
where . Therefore, the number of coded mini-subfiles of with the user is
The appropriate choice of ensures that all the coded mini-subfiles are distinct. Now, we have
Therefore, the user has
coded mini-subfiles of . Thus, the user can retrieve the subfile . In other words, user can decode all the subfiles such that . Since this is true for every , the user obtains all the subfiles such that from the placement phase. That means, from the coded subfiles placed in the caches, a user can decode all the subfiles that are accessible to the corresponding user in the MKR scheme. Thus, the decodability of the demanded files is guaranteed.
Finally, we consider the case . The parameter corresponds to (as shown in Appendix C). In that case, it is possible to achieve rate by employing coded placement appropriately. That is, from the accessible cache contents alone, all the users can decode the entire library of files. The cache placement is as follows. The server divides each file into non-overlapping subfiles of equal size. We have for all . Let be a generator matrix of a MDS code. For every , the server does the following encoding procedure:
Then the server fills the caches as
Therefore, user gets access to for every . This means that the user has distinct coded subfiles for every and for all . From these coded subfiles, the user can decode all the files (from any coded subfiles, the subfiles of a file can be decoded). This completes the proof of Theorem 1.
Remark 4.
The proposed coding scheme depends upon . The parameter is the maximum number of copies of a single subfile available (from the placement phase) to a user in the MKR scheme. When , even though a subfile is stored in different caches, a user gets only a maximum of copies. In our scheme, we ensured that user can decode a subfile using the accessible cache contents if . The number of rounds and the parameters of the encoding matrices depend upon this . However, the general idea behind the coded placement remains the same irrespective of the value of .
V-B Proof of Theorem 2
In this section, we present a coding scheme (we refer to this scheme as Scheme 2).
First, we show the achievability of the rate at by presenting a coding scheme.
a) Placement phase: The server divides each file into non-overlapping subfiles of equal size. Thus, we have for all . Let be a systematic generator matrix of a MDS code. Thus we can write , where is the identity matrix of size . Then the server encodes the subfiles using , which results in coded subfiles. That is
Then the server fills the caches as follows:
The cache contains linearly independent coded combinations of the subfiles . The size of a coded subfile is the same as that of a subfile. Thus, the total size of a cache is .
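One standard way to obtain a systematic MDS generator of the kind used here is systematic Reed-Solomon encoding via polynomial interpolation: the k message symbols are evaluations of a degree-<k polynomial at the first k points, and the parity symbols are further evaluations of the same polynomial. This is a hedged sketch with illustrative parameters, not necessarily the paper's exact construction.

```python
# Systematic [n, k] Reed-Solomon encoding over GF(P) by Lagrange
# interpolation; field size and parameters are illustrative.
P = 257

def lagrange_eval(xs, ys, x, p=P):
    """Evaluate the unique degree-<k polynomial through (xs, ys) at x, over GF(p)."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num = den = 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def systematic_mds_encode(msg, n, p=P):
    """Systematic encode: the first k codeword symbols are the message itself."""
    k = len(msg)
    xs = list(range(1, k + 1))
    parity = [lagrange_eval(xs, msg, x, p) for x in range(k + 1, n + 1)]
    return msg + parity

cw = systematic_mds_encode([10, 20, 30], 5)  # message appears in the clear
```

Any k of the n codeword symbols determine the interpolating polynomial, hence the whole message, which is the linear-independence property the delivery-phase decoding below exploits.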
b) Delivery phase: Let be the file demanded by user in the delivery phase. Let denote the number of distinct files demanded by the users, i.e., . If , then the server simply broadcasts those files. Now, consider the case where . Let denote the number of distinct files demanded by the users who do not access cache , i.e., . Note that for all . If , then the server transmits for all such that . That is, the server transmits the subfile of the files demanded by the users who are not accessing cache . If , then the server transmits the subfile of those distinct files (files demanded by the users who do not access cache ) and the subfile of some other files. The same procedure is repeated for every .
Corresponding to every , the server transmits subfiles. Each subfile has of a file size. Thus the rate achieved is . Now, the claim is that all the users can decode their demanded files completely. When , the decodability is trivial, as the demanded files are sent as such by the server. Let us consider the case where . Consider user who accesses cache for all . The user directly receives for all . Further, user receives the subfile of some files, for every . Also, for all , the user has access to coded subfiles of the subfiles from cache . Thus the user can decode for all and , since any columns of are linearly independent. Therefore, user gets for every . Hence, all the users can decode their demanded files.
Further, if , the rate is trivially achievable by simply broadcasting all the files. Thus, by memory sharing, the rate is achievable for . This completes the proof of Theorem 2.
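The memory-sharing argument invoked above can be sketched numerically: if two rate-memory points are achievable, so is every convex combination of them, obtained by splitting each file and each cache in a fixed ratio. The corner points below are placeholders, not values from the paper.

```python
# Memory sharing: split every file and cache in the ratio a : (1 - a)
# between two achievable schemes to achieve the convex combination.
def memory_share(p1, p2, a):
    """Given achievable (M1, R1) and (M2, R2), return the point achieved by
    time/memory sharing with fraction a on the first scheme."""
    (m1, r1), (m2, r2) = p1, p2
    return (a * m1 + (1 - a) * m2, a * r1 + (1 - a) * r2)

# Illustrative corner points: (M, R) = (1, 2) and (4, 0).
M, R = memory_share((1.0, 2.0), (4.0, 0.0), 0.5)  # -> (2.5, 1.0)
```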
V-C Proof of Theorem 3
The server has files . Assume that the users are arranged in the lexicographic order of the subsets (of , of size ) by which they are indexed. That is, will be the first in the order, will be the second, and so on. Consider the set of the first caches, where . The number of users who access caches only from this set is . Let us focus on those users. Consider a demand vector in which those users request the files . That is, the first user (the user that comes first in the lexicographic ordering among those users) demands , the second user demands , and so on. The remaining users request arbitrary files from , which we are not interested in. Corresponding to , the server makes the transmission . From the contents of the first caches and the transmission , the considered users can collectively decode the first files (one file at each user). Now, consider a demand vector in which those users request the files , and the transmission corresponding to . From the cache contents and the transmission , those users can collectively decode the files . If we consider such demand vectors and the corresponding transmissions, all the files can be decoded at those users' end. Thus, we have
(15a)
(15b)
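The counting at the heart of this cut-set argument can be sketched as follows: of the C(Γ, r) users in total, exactly C(l, r) access caches only among the first l, so ceil(N / C(l, r)) demand rounds (each with its transmission) expose all N files to this user group. Symbols follow the paper's setup; the numeric example is illustrative.

```python
from math import comb, ceil

def users_in_first_l(l, r):
    """Users whose r accessed caches all lie among the first l caches."""
    return comb(l, r)

def rounds_to_cover(N, l, r):
    """Demand vectors needed so this user group collectively decodes all N files."""
    return ceil(N / comb(l, r))

# e.g. Gamma = 6, r = 2, l = 4: C(4, 2) = 6 such users;
# N = 20 files then need ceil(20 / 6) = 4 demand rounds.
```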
Consider an integer . We can expand (15b) as
(16a)
(16b)
where . Using the cache contents and the transmissions , the files can be decoded; hence (16b) follows. Let us define . We can bound the entropy of the transmissions by , where each transmission has rate . Thus, we have
(17a)
(17b)
Now, we find an upper bound on as follows:
(18a)
(18b)
(18c)
By considering any caches from , we can write an inequality similar to the one in (18c). That is,
(19)
where , and is the contents stored in the caches indexed by the elements in the set . That means we can find such inequalities. Averaging over all those inequalities, we get
(20)
By applying Lemma 1 for the random variables , we get
Upon rearranging, we have
(21)
Substituting (21) in (20), we get
Now, consider two cases: a) if , then
(23)
and b) if , then
(24a)
(24b)
(24c)
(24d)
where (23) and (24d) follow from the fact that the cache contents are functions of the files. Therefore, we have
(25)
Therefore, we have the following upper bound on :
(26)
Now, we find an upper bound on , where .
We consider two cases: a) if , then it is possible to decode all the files from and by appropriately choosing the demand vectors . Then the uncertainty in is zero. That is, when , we have .
b) The second case means that . Note that is defined such that, using the first caches and transmissions, it is possible to decode the remaining files by appropriately choosing the demands of the remaining users, if . That is, means that . Then, we have
(27a)
(27b)
(27c)
(27d)
(27e)
(27f)
From the cache contents and the transmissions , it is possible to decode the files ; hence (27b) follows. Further, (27e) follows from the fact that given , there is no uncertainty in the transmissions. Thus, we have the following upper bound on :
(28)
Substituting (26) and (28) in (17b), we get
Upon rearranging the terms and optimizing over all possible values of and , we have the following lower bound on
where . This completes the proof of Theorem 3.
V-D Proof of Theorem 4
V-E Proof of Theorem 5
From Theorem 2, we know that for the combinatorial MACC scheme the rate
is achievable when . By substituting and , we get
Thus, we conclude that
for .
V-F Proof of Theorem 6
In this section, we show that the rate-memory trade-off of Scheme 1 is within a multiplicative gap of 21 from the lower bound on in Theorem 3, when and (i.e., ). We consider the cases , , and separately.
Case 1: First, assume that . Then, we have
By substituting and in (8), we get
Therefore, we obtain
(29)
Case 2: Now, we consider the case . For ( corresponds to ), we have
Therefore, we have
(30)
for , where .
Now, we consider the case and . By substituting in (8), we get
(31)
Substituting in (31) yields
Since , we have . Therefore, we get
Therefore, we have
However, we have . Further, in that regime, the function is monotonically increasing in , and we have
Therefore, we get
(32)
By combining (30) and (32), we get
(33)
when .
Case 3: Finally, we consider the third case, where . Now, we let and , where and . Then, we have
We choose a value of such that . Then, we get . In that case, we have
(34)
Now, we assume that . We have the rate . Then, from (34), we obtain
(35)
Now, we substitute and in (35). Then, we obtain
(36)
since . Next, we consider the case . Then, we have
(37)
where (37) follows from the memory-sharing technique. Using (34), we get
where
Since , we have
Now, let and . Then, we have
(38)
(39)
Next, we consider the memory regime . We have
(40)
We now let and . Then, we have the lower bound from (8)
Therefore, we obtain
However, we have and . Thus
Since , we have
Therefore, we obtain
(41)
Now, consider the case . We have
In (8), substituting gives
Therefore, we get
Also, we have . Then, we obtain
(42)
Next, consider the region . By substituting in (42), we get
where is a function of and has a maximum value of 400 in the region . Therefore, we have
(43)
Next, consider the region . By substituting in (42), we get
where the function has a maximum value of 160 in the region . Therefore, we have
(44)
Next, consider the region . By substituting in (42), we get
where the function has a maximum value of 51.082 in the region . Therefore, we have
(45)
Finally, consider the case . Substituting and in (8) yields
Also, we have
Therefore, we get
(46)
By combining (36), (39), (41), (43), (44), (45), and (46), we get
(47)
when .
VI Conclusion
In this work, we presented two coding schemes for the combinatorial MACC setting introduced in [30]. Both the presented schemes employ coding in the placement phase in addition to the coded transmissions in the delivery phase. That is, we showed that with the help of a coded placement phase, it is possible to achieve a reduced rate compared to the optimal coded caching scheme under uncoded placement. Finally, we derived an information-theoretic lower bound on the optimal rate-memory trade-off of the combinatorial MACC scheme and showed that the first scheme is optimal in the higher memory regime and the second scheme is optimal when the number of files with the server is no more than the number of users in the system.
Appendix A
In this section, we calculate the normalized cache memory as a continuation of (14). From (14), we have
By expanding and cancelling the terms, we get
Thus, we have
By changing the order of summation, we get
By expanding and cancelling the terms, we obtain
Therefore, we have the required expression in (6) as follows:
Appendix B
In this section, we show that, for an corresponding to a as defined in (6), the rate can be expressed as . From (6), we have
Therefore, we get
We expand the term
where .
We can use the following identity to compute the above sum.
Lemma 2 ([38]).
Let be arbitrary positive integers, and let be a positive integer such that . Then
where the summation ranges over all non-negative integers and such that , and .
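Lemma 2 generalizes the classical Chu-Vandermonde identity, sum_k C(m, k) C(n, p-k) = C(m+n, p), which the snippet below verifies numerically over a small range (a sanity check, not a proof).

```python
from math import comb

def vandermonde_lhs(m, n, p):
    """Left-hand side of the Chu-Vandermonde identity: sum_k C(m,k)*C(n,p-k).
    math.comb returns 0 when the lower index exceeds the upper one."""
    return sum(comb(m, k) * comb(n, p - k) for k in range(p + 1))

# Exhaustive check for small parameters: the sum equals C(m + n, p).
assert all(vandermonde_lhs(m, n, p) == comb(m + n, p)
           for m in range(8) for n in range(8) for p in range(m + n + 1))
```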
Appendix C
Appendix D
In this section, we consider the rate-memory trade-off of the combinatorial MACC scheme given by Scheme 1. We consider a general and an , and consider the memory regime , for some positive real numbers and . Notice that, for , we chose and (Case 2 in the proof of Theorem 6). For a and corresponding from (6), we have
(52)
By substituting in (8), we get
(53)
Substituting in (53), where (which ensures that ), yields
(54)
(55)
Now, we separately bound the two terms in (55) as follows:
Since , we have
(56)
The second term in (55) can be bounded as
(57)
since . By combining (56) and (57), we get
(58)
From (58), we can say that the gap grows approximately in the order
(59)
where and are constants. Also, note that by choosing appropriately, we can bring close to 1 (though not arbitrarily close to 1). Thus, the optimality gap grows by the (dominant) factor .
References
- [1] M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
- [2] Q. Yu, M. A. Maddah-Ali and A. S. Avestimehr, “The exact rate-memory tradeoff for caching with uncoded prefetching,” in 2017 IEEE Int. Symp. Inf. Theory (ISIT), 2017, pp. 1613–1617.
- [3] K. Wan, D. Tuninetti, and P. Piantanida, “An index coding approach to caching with uncoded cache placement,” IEEE Trans. Inf. Theory, vol. 66, no. 3, pp. 1318–1332, Mar. 2020.
- [4] Z. Chen, P. Fan, and K. B. Letaief, “Fundamental limits of caching: Improved bounds for users with small buffers,” IET Commun., vol. 10, no. 17, pp. 2315–2318, Nov. 2016.
- [5] M. M. Amiri and D. Gunduz, “Fundamental limits of coded caching: Improved delivery rate-cache capacity tradeoff,” IEEE Trans. Commun., vol. 65, no. 2, pp. 806–815, Feb. 2017.
- [6] C. Tian and J. Chen, “Caching and delivery via interference elimination,” IEEE Trans. Inf. Theory, vol. 64, no. 3, pp. 1548–1560, Mar. 2018.
- [7] J. Gómez-Vilardebó, “A novel centralized coded caching scheme with coded prefetching,” IEEE J. Sel. Areas Commun., vol. 36, no. 6, pp. 1165–1175, Jun. 2018.
- [8] K. Zhang and C. Tian, “Fundamental limits of coded caching: From uncoded prefetching to coded prefetching,” IEEE J. Sel. Areas Commun., vol. 36, no. 6, pp. 1153–1164, Jun. 2018.
- [9] J. Gómez-Vilardebó, “Fundamental limits of caching: Improved rate-memory trade-off with coded prefetching,” IEEE J. Sel. Areas Commun., vol. 66, no. 10, pp. 4488–4497, Oct. 2018.
- [10] A. Sengupta, R. Tandon, and T. C. Clancy, “Improved approximation of storage-rate tradeoff for caching via new outer bounds,” in 2015 IEEE Int. Symp. Inf. Theory (ISIT), 2015, pp. 1691–1695.
- [11] A. Sengupta and R. Tandon, “Improved approximation of storage-rate tradeoff for caching with multiple demands,” IEEE Trans. Commun., vol. 65, no. 5, pp. 1940–1955, May 2017.
- [12] C.Y. Wang, S. H. Lim, and M. Gastpar, “A new converse bound for coded caching,” in Inf. Theory and Appl. Workshop (ITA), Jan. 2016, pp.1–6.
- [13] C.Y. Wang, S. S. Bidokhti, and M. Wigger, “Improved converses and gap-results for coded caching,” in IEEE Int. Symp. Inf. Theory (ISIT), Jun. 2017, pp. 2428–2432.
- [14] H. Ghasemi and A. Ramamoorthy, “Improved lower bounds for coded caching,” IEEE Trans. Inf. Theory, vol. 63, no. 7, pp. 4388–4413, Jul. 2017.
- [15] J. Hachem, N. Karamchandani, and S. N. Diggavi, “Coded caching for multi-level popularity and access,” IEEE Trans. Inf. Theory, vol. 63, no.5, pp. 3108–3141, May 2017.
- [16] E. Parrinello, A. Ünsal and P. Elia, “Fundamental limits of coded caching with multiple antennas, shared caches and uncoded prefetching,” IEEE Trans. Inf. Theory, vol. 66, no. 4, pp. 2252–2268, Apr. 2020.
- [17] K. S. Reddy and N. Karamchandani, “Rate-memory trade-off for multiaccess coded caching with uncoded placement,” IEEE Trans. Commun., vol. 68, no. 6, pp. 3261–3274, Jun. 2020.
- [18] B. Serbetci, E. Parrinello, and P. Elia, “Multi-access coded caching: gains beyond cache-redundancy,” in IEEE Inf. Theory Workshop (ITW), 2019, pp. 1–5.
- [19] K. S. Reddy and N. Karamchandani, “Structured index coding problems and multi-access coded caching,” in IEEE Inf. Theory Workshop (ITW), 2020, pp. 1–5.
- [20] A. A. Mahesh and B. S. Rajan, “A coded caching scheme with linear sub-packetization and its application to multi-access coded caching,” in IEEE Inf. Theory Workshop (ITW), 2020, pp. 1–5.
- [21] S. Sasi and B. S. Rajan, “Multi-access coded caching scheme with linear sub-packetization using PDAs,” in 2021 IEEE Int. Symp. Inf. Theory (ISIT), 2021, pp. 861–866.
- [22] M. Cheng, K. Wan, D. Liang, M. Zhang, and G. Caire, “A novel transformation approach of shared-link coded caching schemes for multiaccess networks,” IEEE Trans. Commun., vol. 69, no. 11, pp. 7376–7389, Nov. 2021.
- [23] K. K. K. Namboodiri and B. S. Rajan, “Multi-access coded caching with coded placement,” in IEEE Wireless Commun. Netw. Conf. (WCNC), 2022, pp. 2274–2279.
- [24] K. K. K. Namboodiri and B. S. Rajan, “Improved lower bounds for multi-access coded caching,” IEEE Trans. Commun., vol. 70, no. 7, pp. 4454–4468, Jul. 2022.
- [25] K. K. K. Namboodiri and B. S. Rajan, “Multi-access coded caching with secure delivery,” in IEEE Inf. Theory Workshop (ITW), 2021, pp. 1–6.
- [26] K. K. K. Namboodiri and B. S. Rajan, “Multi-access coded caching with demand privacy,” in IEEE Wireless Commun. Netw. Conf. (WCNC), 2022, pp. 2280–2285.
- [27] D. Katyal, P. N. Muralidhar, and B. S. Rajan, “Multi-access coded caching schemes from cross resolvable designs,” IEEE Trans. Commun., vol. 69, no. 5, pp. 2997–3010, May 2021.
- [28] P. N. Muralidhar, D. Katyal, and B. S. Rajan, “Improved multi-access coded caching schemes from cross resolvable designs,” in IEEE Inf. Theory Workshop (ITW), 2021, pp. 1–6.
- [29] N. Das and B. S. Rajan, “Multi-access coded caching schemes from maximal cross resolvable designs,” in IEEE Int. Symp. Inf. Theory (ISIT), 2022, pp. 1719–1724.
- [30] P. N. Muralidhar, D. Katyal, and B. S. Rajan, “Maddah-Ali-Niesen scheme for multi-access coded caching,” in IEEE Inf. Theory Workshop (ITW), 2021, pp. 1–6.
- [31] F. Brunero and P. Elia, “Fundamental limits of combinatorial multi-access caching,” IEEE Trans. Inf. Theory, 2022 (Early Access).
- [32] M. Chinnapadamala and B. S. Rajan, “Security and privacy in cache-aided linear function retrieval for multi-access coded caching,” in IEEE Inf. Theory Workshop (ITW), 2022.
- [33] M. Ji, A. M. Tulino, J. Llorca, and G. Caire, “Caching in combination networks,” in 49th Asilomar Conf. Signals Syst. Comput., 2015, pp. 1269–1273.
- [34] A. A. Zewail and A. Yener, “Coded caching for combination networks with cache-aided relays,” in IEEE Int. Symp. Inf. Theory (ISIT), 2017, pp. 2433–2437.
- [35] N. Mital, D. Gündüz, and C. Ling, “Coded caching in a multi-server system with random topology,” IEEE Trans. Commun., vol. 68, no. 8, pp. 4620–4631, Aug. 2020.
- [36] F. J. MacWilliams and N. J. A. Sloane, “The Theory of Error-Correcting Codes,” Elsevier, 1977.
- [37] T. S. Han, “Nonnegative entropy measures of multivariate symmetric correlations,” Inf. Control, 36(2):133–156, 1978.
- [38] R. Meštrović, “Several generalizations and variations of Chu-Vandermonde identity,” available in arXiv:1807.10604[math.CO], Jul. 2018.
- [39] K. K. K. Namboodiri and B. S. Rajan, “Combinatorial multi-access coded caching: improved rate-memory trade-off with coded placement,” in IEEE Inf. Theory Workshop (ITW), 2023.
K. K. Krishnan Namboodiri (Graduate Student Member, IEEE) was born in Kerala, India. He received the B.Tech. degree in electronics and communication engineering from the National Institute of Technology, Calicut, in 2019. He is currently pursuing the Ph.D. degree with the Department of Electrical Communication Engineering, Indian Institute of Science, Bengaluru. His primary research interests include coded caching, index coding, information theory and wireless communication. He is a recipient of the Prime Minister’s Research Fellowship (2021).
B. Sundar Rajan (Life Fellow, IEEE) was born in Tamil Nadu, India. He received the B.Sc. degree in mathematics from Madras University, Madras, India, in 1979, the B.Tech. degree in electronics from the Madras Institute of Technology, Madras, in 1982, and the M.Tech. and Ph.D. degrees in electrical engineering from IIT Kanpur, Kanpur, in 1984 and 1989, respectively. He was a Faculty Member at the Department of Electrical Engineering, IIT Delhi, New Delhi, from 1990 to 1997. He has been a Professor with the Department of Electrical Communication Engineering, Indian Institute of Science, Bengaluru, since 1998. His primary research interests include space-time coding for MIMO channels, distributed space-time coding and cooperative communication, coding for multiple-access and relay channels, and network coding. Dr. Rajan is a J. C. Bose National Fellow (2016–2025) and a member of the American Mathematical Society. He is a fellow of the Indian National Academy of Engineering, the Indian National Science Academy, the Indian Academy of Sciences, and the National Academy of Sciences, India. He was a recipient of Prof. Rustum Choksi Award by IISc for Excellence in Research in Engineering in 2009, the IETE Pune Center’s S. V. C. Aiya Award for Telecom Education in 2004, and the Best Academic Paper Award at IEEE WCNC in 2011. He served as the Technical Program Co-Chair for IEEE Information Theory Workshop (ITW’02) held in Bengaluru, in 2002. He was an Editor of IEEE Wireless Communications Letters (2012–2015) and IEEE Transactions on Wireless Communications (2007–2011), and an Associate Editor of Coding Theory for IEEE Transactions on Information Theory (2008–2011 and 2013–2015).