On the Definition of Malicious
Private Information Retrieval
Abstract
A multi-server private information retrieval (PIR) protocol allows a client to obtain an entry of its choice from a database, held by one or more servers, while hiding the identity of the entry from small enough coalitions of servers. In this paper, we study PIR protocols in which some of the servers are malicious and may not send messages according to the prescribed protocol. In previous papers, such protocols were defined by requiring that they are correct, private, and robust to malicious servers, i.e., by listing three properties that they should satisfy. However, 40 years of experience in studying secure multiparty protocols has taught us that defining the security of protocols by a list of required properties is problematic.
In this paper, we rectify this situation and define the security of PIR protocols with malicious servers using the real vs. ideal paradigm. We study the relationship between the property-based definition of PIR protocols and the real vs. ideal definition, showing the following results:
-
We prove that if we require full security from PIR protocols, i.e., the client outputs the correct value of the database entry with high probability even if a minority of the servers are malicious, then the two definitions are equivalent. This implies that constructions of such protocols that were proven secure using the property-based definition are actually secure under the “correct” definition of security.
-
We show that if we require security-with-abort from PIR protocols (called PIR protocols with error-detection in previous papers), i.e., protocols in which the client either outputs the correct value or an abort symbol, then there are protocols that are secure under the property-based definition yet do not satisfy the real vs. ideal definition; that is, they admit a selective-abort attack. This shows that the property-based definition of PIR protocols with security-with-abort is problematic.
-
We consider the compiler of Eriguchi et al. (TCC 22) that starts with a PIR protocol that is secure against semi-honest servers and constructs a PIR protocol with security-with-abort; this compiler implies the best-known PIR protocols with security-with-abort. We show that applying this compiler does not result in PIR protocols that are secure according to the real vs. ideal definition. However, we prove that a simple modification of this compiler results in PIR protocols that are secure according to the real vs. ideal definition.
Keywords and phrases: Private information retrieval, secure multiparty computation
Funding: Amos Beimel: Partially supported by ISF grant 391/21. Co-funded by the European Union (ERC, NFITSC, 101097959). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
2012 ACM Subject Classification: Security and privacy, Cryptography
Editor: Niv Gilboa
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
A private information retrieval (PIR) protocol [16] allows a client to obtain an entry of its choice from a database held by one or more servers, such that nothing is revealed to any small enough coalition of servers about the item being retrieved. For example, an investor might want to know the value of a specific stock without revealing which stock they are interested in. This is modeled by considering an $n$-bit string $x$ held by each server, and the client, holding a retrieval index $i \in [n]$, wishing to learn the $i$th entry of $x$, i.e., $x_i$, without revealing $i$. The trivial solution is to let a single server send the entire database to the client; Chor et al. [16] showed that this is optimal in the information-theoretic setting when there is a single server. However, better protocols are known if the database is held by $k \geq 2$ servers. PIR protocols were used to construct secure multiparty protocols [30, 7] and locally decodable codes [34], and inspired constructions of conditional disclosure of secrets [39, 40]. Most constructions of PIR protocols guarantee that any single server will not learn any information on the client’s retrieval index [16, 2, 30, 31, 6, 8, 35, 46, 48, 22, 33, 14, 20, 27, 1]; e.g., there is a 3-server PIR protocol secure against a single server with sub-polynomial communication complexity [27]. When there are many servers, it is natural to require that the protocol guarantees the privacy of the client against coalitions of $t$ colluding servers. Constructions of such PIR protocols are given in [30, 6, 8, 46, 5]. It is also natural to consider malicious servers that may deviate from the protocol. In the stock market example above, one may consider the case where some of the servers try to give wrong information on the stock. Such PIR protocols were first discussed in [9] and further studied in [47, 46, 28, 18, 49, 43, 44, 37, 4, 23, 24, 25].
The security of PIR protocols in the literature is defined by listing the desired properties from the protocol. However, PIR is a special case of secure multiparty computation (MPC), where it is well-known that separating the security properties is problematic. Indeed, there are MPC protocols that satisfy the natural security properties (such as privacy and correctness) yet should clearly be considered insecure [42, 13, 29]. Instead, the security of MPC protocols is defined using the real vs. ideal world paradigm. Intuitively, we consider an ideal process for computing a function via a trusted party that the adversary cannot corrupt. It is then required that any attack performed in the real world can be simulated in the ideal world, where the adversary is more limited. Different ideal processes capture different security properties, e.g., full security and security-with-abort.
Defining malicious security of PIR protocols via a list of properties is thus problematic, and it is unclear whether such protocols admit attacks not covered by the listed properties. In this work, we rectify this situation and define the security of PIR protocols using the real vs. ideal world paradigm. Specifically, we are interested in the notions of full security (also known as guaranteed output delivery) and security-with-abort. The former captures the requirement that the client always obtains the “correct” output, and the latter allows the client to abort but never to output an “incorrect” value. We ask how such definitions relate to the property-based definitions used so far, and whether there are efficient PIR protocols satisfying the real vs. ideal security definition. We consider the perfect setting, the statistical setting, and the computational setting.
1.1 Our Contributions
Our conceptual contribution is defining the ideal processes of the PIR functionality for full security and security-with-abort against malicious adversaries (see Section 3 for the formal definitions). We then compare the simulation-based security definition to the property-based security definition.
1.1.1 The Ideal World Processes
In this section, we describe the ideal world process for each of the desired security notions. A PIR protocol is then considered secure according to an ideal world if any adversary corrupting a subset of the servers can be simulated in the ideal world. We stress that definitions for PIR protocols do not have any privacy requirements from the client; in particular, the client may learn the whole database. Thus, we only consider adversaries that do not corrupt the client.
Full Security of PIR Protocols
We first describe the definition of fully secure PIR protocols. Roughly, the definition captures the requirement that the client outputs the correct value $x_i$, regardless of the messages sent by the malicious servers. Observe that this only makes sense when the majority of the servers are honest: if the number of servers is $k$ and $t$ of them might be corrupted, where $t \geq k/2$, then the client cannot distinguish an execution in which the first $t$ servers are corrupted and behave honestly on input $x'$ while the remaining honest servers hold $x$, from an execution in which the last $k - t$ servers are corrupted and behave honestly on input $x$ while the first $t$ servers are honest and hold $x'$. The ideal world for full security proceeds as follows. The client sends $i$ to the trusted party and each honest server sends $x$ to the trusted party. The corrupted servers may change their databases (based on the instructions of the ideal-world adversary) and send them to the trusted party. Let $x^*$ denote the database that was sent by more than half of the servers (note that $x^* = x$ if there is an honest majority). The trusted party then sends $x^*_i$ to the client. The ideal functionality is only defined when there is an honest majority.
Secure-With-Abort PIR Protocols
We next describe the definition of secure-with-abort PIR protocols. Here, we allow the client to output $\bot$ (indicating abort). The security definition captures the requirement that the client never outputs an incorrect value. Note that, unlike the ideal world for full security, here the definition allows a majority of corrupted servers.
The ideal-world computation proceeds as follows. The client sends $i$ to the trusted party and each honest server sends $x$ to the trusted party. The corrupted servers may change their databases (based on the instructions of the ideal-world adversary) and send them to the trusted party. If the trusted party receives the same database $x$ from all servers, then it sends $x_i$ to the client; otherwise, it sends $\bot$ to the client.
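The two trusted-party computations described above can be sketched in code. The following is a minimal illustration, not from the paper; the function names are ours, databases are bit strings, indices are 1-based (matching $[n] = \{1, \ldots, n\}$), and `None` stands for the abort symbol $\bot$.

```python
from collections import Counter

def trusted_party_full(databases, i):
    """Full security: output the i-th entry of the database sent by a strict
    majority of the servers (well-defined whenever a majority is honest)."""
    db, count = Counter(databases).most_common(1)[0]
    assert count > len(databases) / 2, "no majority database"
    return db[i - 1]

def trusted_party_abort(databases, i):
    """Security-with-abort: output x_i if all servers sent the same database,
    and the abort symbol (None, standing for the symbol ⊥) otherwise."""
    if all(db == databases[0] for db in databases):
        return databases[0][i - 1]
    return None

# One corrupted server changing its database: full security ignores it,
# while security-with-abort makes the client abort.
assert trusted_party_full(("0110", "0110", "0111"), 3) == "1"
assert trusted_party_abort(("0110", "0110", "0111"), 3) is None
assert trusted_party_abort(("0110", "0110", "0110"), 1) == "0"
```

The contrast is visible in the demo: with one deviating database out of three, the majority rule still returns the honest entry, whereas the unanimity rule aborts.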
1.1.2 Comparing Simulation-Based Security to Property-Based Security
We compare the simulation-based security definitions to the property-based security definitions. In the following, we fix the total number of servers to be $k$, and let $t$ bound the number of corrupted servers. We refer the reader to Section 4 for the formal statements and proofs of the theorems.
Full Security
We first compare full security with the property-based definition. In addition to correctness and $t$-privacy, here we also consider the notion of $t$-Byzantine robust PIR protocols (also known as $t$-error-correcting PIR protocols [24]) defined by Beimel and Stahl [9]. Roughly, it requires that the client outputs $x_i$ even if $t$ malicious servers arbitrarily deviate from the protocol. Note that, similarly to full security, this definition only makes sense when the majority of the servers are honest. As would be expected, full security implies all three security properties. We show that the converse also holds.
Theorem 1 (Informal, equivalence of full security and the property-based definition).
Any $k$-server PIR protocol is $t$-fully secure if and only if it is correct, $t$-private, and $t$-Byzantine robust.
Secure-With-Abort
Finally, we consider security-with-abort. Here, instead of $t$-Byzantine robustness, we consider the notion of $t$-error-detecting PIR protocols defined by Eriguchi et al. [23, 24]. In a $t$-error-detecting PIR protocol, the client can abort (i.e., by outputting $\bot$), but security requires that no set of $t$ malicious servers can force the client to output an incorrect value. Clearly, error-detecting is weaker than Byzantine robust; however, it is possible to construct $t$-error-detecting PIR protocols even when $t \geq k/2$ [23, 24].
Similarly to the previous definitions, the simulation-based definition implies the property-based definition. However, we show that here the converse is not true.
Theorem 2 (Informal, security-with-abort and the property-based definition).
Any $k$-server PIR protocol that is $t$-secure-with-abort is correct, $t$-private, and $t$-error-detecting. However, the converse does not hold.
To see why the properties do not imply simulation-based security, consider the following protocol, in which a single malicious server can cause a “selective abort”, i.e., the client will abort if and only if $i = 1$. In the protocol, each server sends $x$ to the client; additionally, the first server sends an extra bit $b$. An honest first server sends $b = 0$. If the client received different databases, or if it has input $i = 1$ and it received $b = 1$, then it outputs $\bot$. Otherwise, the client received a single database $x$, and it outputs $x_i$. Clearly, this protocol is correct, 1-private, and 1-error-detecting. However, it is not 1-secure-with-abort. This is because the first server can force different probabilities for the client to abort on different indices $i$. Specifically, if the server sends $b = 1$, then the client always aborts on index $i = 1$, and it never aborts on any index $i \neq 1$. Note that this holds even though the server does not know anything about $i$. See Example 18 for the formal argument. In Example 19, we show that even if we require the same probability of abort on all indices $i$, this is still insufficient to claim security-with-abort.
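The selective-abort attack above is simple enough to simulate directly. The following sketch (function names are ours; `None` stands for $\bot$, indices are 1-based) checks that the honest execution never aborts, while the malicious extra bit makes the client abort exactly on index 1.

```python
# A sketch of the counterexample protocol.  Both servers send their copy of
# the database; the first server also sends an extra bit b (honest value: 0).
def client_output(i, db_copies, extra_bit):
    """Aborts (returns None, standing for the symbol ⊥) if the copies differ,
    or if the retrieval index is 1 and the extra bit is 1; else outputs x_i."""
    if any(db != db_copies[0] for db in db_copies):
        return None
    if i == 1 and extra_bit == 1:
        return None
    return db_copies[0][i - 1]  # 1-indexed retrieval

x = "1011"
# Honest execution: the client never aborts and always outputs x_i.
assert all(client_output(i, [x, x], 0) == x[i - 1] for i in range(1, 5))
# Malicious first server sends b = 1: the client aborts iff i = 1, even
# though the server learns nothing about i -- a selective abort.
assert client_output(1, [x, x], 1) is None
assert all(client_output(i, [x, x], 1) == x[i - 1] for i in range(2, 5))
```

The attack violates simulation-based security because no ideal-world simulator can make the abort probability depend on $i$: the trusted party's decision to send $\bot$ is independent of the retrieval index.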
Colombo et al. [17] noticed that standard definitions for the security of PIR might allow for selective abort (however, they did not provide any concrete example). This motivated them to define authenticated PIR to capture security against such attacks. However, their definition is property-based, and it is unclear whether their definition is equivalent to our simulation-based security, or whether it captures security against attacks not mentioned in their paper.
A Compiler From a Private PIR Protocol to a Secure-With-Abort Protocol
Eriguchi et al. [24] showed a compiler that takes any correct and $t$-private $k$-server PIR protocol and transforms it into a correct, $t$-private, and $t$-error-detecting PIR protocol. In light of Theorem 2, it is natural to ask whether their compiler satisfies the stronger requirement of $t$-security-with-abort. We show that this is not the case; however, we prove that a simple modification of their construction results in a secure-with-abort protocol.
Theorem 3 (Informal, compiler to secure-with-abort).
There exists a compiler that transforms any correct and $t$-private $k$-server PIR protocol $\Pi$ into a $t$-secure-with-abort PIR protocol. The total communication complexity (i.e., the total number of bits sent) of the new protocol exceeds that of $\Pi$ by a multiplicative factor that depends only on the number of servers.
This result implies that for a constant number of servers, the communication complexity of $t$-secure-with-abort PIR protocols is equivalent to that of PIR protocols secure against $t$ semi-honest servers (up to a multiplicative factor). We refer to Section 5 for the formal statement and proof. In Table 1 we summarize the PIR protocols obtained from our results.
Table 1: Summary of the PIR protocols obtained from our results: $t$-private, $t$-secure-with-abort, and $t$-fully secure protocols based on [5] + [20], on [5] + [22], and on the protocol of [46].
1.2 Our Techniques
To illustrate our techniques, we next show that $t$-semi-honest security is equivalent to correctness and $t$-privacy. Since simulation-based security is clearly stronger than property-based security, we only show that correctness and $t$-privacy imply $t$-semi-honest security. We do so for statistical security (the other cases are handled similarly). This is a special case of a known result that the property-based definition is equivalent to the simulation-based definition for deterministic functionalities with semi-honest security against unbounded adversaries.
Fix a real-world semi-honest adversary corrupting a set of at most $t$ servers. We define its simulator as follows: it runs the query algorithm of the client on index $1$ and outputs the queries that correspond to the corrupted servers. We now show that the statistical distance between the real and ideal worlds is negligible. By construction of the simulator and the fact that the client always outputs $x_i$ in the ideal world, we need to show that the joint distribution of the corrupted servers' queries on index $i$ together with the client's output in the real world is statistically close to the joint distribution of their queries on index $1$ together with $x_i$. First, by the $t$-privacy of the protocol, the queries on index $i$ and the queries on index $1$ are statistically close. Second, observe that correctness implies that the client's output in the real world is statistically close to $x_i$. Thus, the two joint distributions are statistically close.
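The simulator idea of running the client on a fixed index can be made concrete on the classic 2-server XOR-based protocol. In this sketch (function names are ours, not from the paper), the client picks a uniformly random subset $S \subseteq [n]$ as a bit mask, sends $S$ to server 1 and $S \oplus \{i\}$ to server 2; the code verifies by exhaustive enumeration that each single server's query distribution is uniform and independent of $i$, so querying on index 1 is a perfect simulation.

```python
import itertools

def queries(i, r):
    """Client's queries in the 2-server XOR protocol: server 1 gets the
    random mask r, server 2 gets r with position i (1-indexed) flipped."""
    q2 = list(r)
    q2[i - 1] ^= 1
    return tuple(r), tuple(q2)

n = 4

def query_distribution(server, i):
    """Exact distribution of one server's query over all 2^n random masks."""
    counts = {}
    for r in itertools.product([0, 1], repeat=n):
        q = queries(i, r)[server]
        counts[q] = counts.get(q, 0) + 1
    return counts

# Same distribution for every retrieval index: the simulator's view on
# index 1 is identical to the real view on any index i.
for server in (0, 1):
    assert query_distribution(server, 1) == query_distribution(server, 3)
```

Here the simulation is perfect; for general statistically $t$-private protocols, the same argument goes through with "identical" relaxed to "statistically close", exactly as in the proof sketch above.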
1.3 Related Works
1.3.1 1-Private PIR
We next state the best known PIR protocols. Efremenko [22] constructed a 3-server 1-private PIR protocol with query length $2^{O(\sqrt{\log n \log\log n})}$ and $2^r$-server 1-private PIR protocols with query length $2^{O((\log n)^{1/r} (\log\log n)^{1 - 1/r})}$; the answer length in these protocols is constant. Dvir and Gopi [20] constructed a 2-server PIR protocol with query and answer length $n^{O(\sqrt{\log\log n / \log n})}$. Very recently, Ghasemi, Kopparty, and Sudan [27] constructed a 3-server PIR protocol with smaller query and answer length. These constructions were simplified by Alon, Beimel, and Lasri [1]. The communication complexity of $k$-server PIR protocols for $k > 3$ was improved by Itoh and Suzuki [32, 33], Chee, Feng, Ling, Wang, and Zhang [14], Dvir and Gopi [20], and Ghasemi et al. [27]. For example, there is a 6-server PIR protocol with communication complexity smaller than that of the best known 3-server protocols [32, 3, 27, 1]. The best known lower bound on the total communication complexity of 2-server PIR protocols is $(5 - o(1)) \log n$, proved by Wehner and de Wolf [45] (improving on [41, 34, 36]).
1.3.2 -Private PIR
$t$-private PIR protocols were constructed in [16, 2, 30, 6, 8, 46]. In particular, Woodruff and Yekhanin [46] presented a $t$-private $k$-server PIR construction for general $k$ and $t$ with communication complexity $O(n^{1/\lfloor (2k-1)/t \rfloor})$. Barkol, Ishai, and Weinreb [5] presented a general transformation from 1-private PIR protocols to $t$-private PIR protocols: given a 1-private PIR protocol with a small number of servers, they constructed a $t$-private $k$-server PIR protocol whose query and answer lengths are closely related to those of the underlying protocol. When $t$ is small compared to $k$, this gives better protocols than [46]; e.g., it yields $t$-private PIR protocols with sub-polynomial communication complexity using the 2-server protocol of [20] or the protocols of [22].
1.3.3 Robust and Byzantine-Robust PIR
Beimel and Stahl [9] introduced robust and Byzantine robust PIR protocols. A PIR protocol is $t$-robust if the client can recover the correct value even if $t$ of the servers go offline. The generalized notion of $t$-Byzantine robustness requires robustness to hold even if $t$ of the servers are malicious. This was further studied in subsequent works [46, 37, 23, 24]. In particular, Eriguchi et al. [24] showed that for a constant number of servers:
-
The communication complexity of perfect $t$-Byzantine robust and $t$-private $k$-server PIR is equivalent to the communication complexity of $t$-private $(k - 2t)$-server PIR;
-
The communication complexity of statistical $t$-Byzantine robust and $t$-private $k$-server PIR is equivalent to that of $t$-private $(k - t)$-server PIR.
Combining the above results with [46] yields $t$-private and $t$-Byzantine robust $k$-server statistical PIR protocols with sublinear total communication. When $t$ is relatively small compared to $k$, the results of [5, 20, 22] yield $t$-private and $t$-Byzantine robust $k$-server statistical PIR protocols with sub-polynomial total communication complexity. Byzantine robust PIR protocols where the database entries are large were considered in [47, 43, 44, 4]; these works measure efficiency relative to the size of the entries rather than the size of the database (as we do in our work).
Eriguchi et al. [23, 24] introduced error-detecting PIR, where the client may abort, and it is required that it never outputs a wrong value. In [23], they showed that error-detecting PIR with perfect security and communication that is sublinear in the database size is impossible. In particular, this shows that there is no non-trivial perfectly secure-with-abort PIR protocol. In [24], they constructed a compiler transforming any $t$-private $k$-server PIR protocol into a $t$-error-detecting one with a small overhead in the total communication. When $t$ is very small compared to $k$, this yields significantly better communication complexity than the Byzantine robust protocols discussed above, e.g., $t$-error-detecting PIR protocols with sub-polynomial total communication complexity. Additional constructions of error-detecting PIR protocols appear in Eriguchi et al. [25].
Colombo et al. [17] also observed that standard PIR security definitions allow for selective abort attacks. This motivated them to define authenticated PIR protocols, where, similarly to error-detection, it is required that either the client aborts or its output is consistent with the honest server’s database. Additionally, the privacy of the client’s retrieval index is required to hold even if the adversary knows whether the client aborts or not, which prevents selective abort attacks. Unlike our definitions, Colombo et al. [17] defined security by listing the desired security properties.
1.3.4 Computational PIR
Computational PIR protocols consider computationally bounded servers. This model was first considered by Chor and Gilboa [15]. Computational single-server PIR protocols can be constructed with non-trivial communication complexity [38, 12, 26]. Computational PIR can be seen as a special case of fully homomorphic encryption; building on the fully homomorphic encryption scheme of Brakerski and Vaikuntanathan [11], one can construct a single-server PIR protocol with total communication $\mathrm{poly}(\lambda, \log n)$, where $\lambda$ is the security parameter.
1.3.5 Spooky PIR
Dwork et al. [21] studied two-round succinct arguments for NP by composing PCP proofs with computational single-server PIR protocols. They showed that such heuristics may be insecure. More generally, they argued that executing PIR protocols in parallel may introduce “spooky interactions”: even though the queries made by the client are independent and the server knows nothing about the indexes of the client, the server may introduce correlations between the queries and outputs of the client. Dodis et al. [19] later showed that this holds even when using multi-prover interactive proofs instead of PCPs. These results show the necessity of defining the security of PIR protocols via the real vs. ideal world paradigm.
1.3.6 Verifiable PIR
Ben-David et al. [10] introduced verifiable PIR, where a (single) server should be able to prove that the database satisfies various properties. Although one of their definitions follows the real vs. ideal world paradigm, it does not require privacy or security against selective abort attacks.
2 Preliminaries
2.1 Notations
We use bold characters to denote vectors. For $n \in \mathbb{N}$, let $[n] = \{1, \ldots, n\}$. For a set $S$, we write $s \leftarrow S$ to indicate that $s$ is selected uniformly at random from $S$. Given a random variable (or a distribution) $X$, we write $x \leftarrow X$ to indicate that $x$ is selected according to $X$. ppt stands for probabilistic polynomial time. A function $\mu : \mathbb{N} \to [0, 1]$ is called negligible if for every positive polynomial $p(\cdot)$ and all sufficiently large $n$, it holds that $\mu(n) < 1/p(n)$. For a vector $\mathbf{v}$ of dimension $k$, we write $\mathbf{v}_j$ for its $j$th coordinate, and for $S \subseteq [k]$ we write $\mathbf{v}_S = (\mathbf{v}_j)_{j \in S}$.
A distribution ensemble $X = \{X_{n,a}\}_{n \in \mathbb{N}, a \in D_n}$ is an infinite sequence of random variables indexed by $n \in \mathbb{N}$ and $a \in D_n$, where $D_n$ is a domain that might depend on $n$. Computational indistinguishability is defined as follows.
Definition 4.
Let $X = \{X_{n,a}\}_{n \in \mathbb{N}, a \in D_n}$ and $Y = \{Y_{n,a}\}_{n \in \mathbb{N}, a \in D_n}$ be two ensembles. We say that $X$ and $Y$ are computationally indistinguishable, denoted $X \equiv_c Y$, if for every ppt distinguisher $D$, there exists a negligible function $\mu(\cdot)$, such that for all $n$ and $a$,
$$\left| \Pr[D(1^n, X_{n,a}) = 1] - \Pr[D(1^n, Y_{n,a}) = 1] \right| \leq \mu(n).$$
Definition 5.
The statistical distance between two finite random variables $X$ and $Y$ is defined as
$$\mathrm{SD}(X, Y) = \frac{1}{2} \sum_{z} \left| \Pr[X = z] - \Pr[Y = z] \right|.$$
Two ensembles $X = \{X_{n,a}\}_{n \in \mathbb{N}, a \in D_n}$ and $Y = \{Y_{n,a}\}_{n \in \mathbb{N}, a \in D_n}$ are said to be statistically close, denoted $X \equiv_s Y$, if there exists a negligible function $\mu(\cdot)$, such that for all $n$ and $a$,
$$\mathrm{SD}(X_{n,a}, Y_{n,a}) \leq \mu(n).$$
We say that $X$ and $Y$ are identically distributed, denoted $X \equiv Y$, if $\mathrm{SD}(X_{n,a}, Y_{n,a}) = 0$ for all $n$ and $a$.
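For finite distributions, the statistical distance of Definition 5 can be computed directly. The sketch below (function name is ours) represents a distribution as a dictionary from outcomes to probabilities.

```python
def statistical_distance(p, q):
    """SD(X, Y) = 1/2 * sum over z of |Pr[X = z] - Pr[Y = z]| for finite
    distributions given as dicts mapping outcomes to probabilities."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(z, 0.0) - q.get(z, 0.0)) for z in support)

# A fair coin vs. a slightly biased coin is at distance 0.1; distributions
# with disjoint supports are at the maximum distance 1.
fair = {"heads": 0.5, "tails": 0.5}
biased = {"heads": 0.6, "tails": 0.4}
assert abs(statistical_distance(fair, biased) - 0.1) < 1e-12
assert statistical_distance(fair, fair) == 0.0
assert statistical_distance({"a": 1.0}, {"b": 1.0}) == 1.0
```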
Theorem 6 (Hoeffding’s inequality for the hypergeometric distribution).
Let $N, m \in \mathbb{N}$ with $m \leq N$, let $A \subseteq [N]$ with $\mu = |A| / N$, and let $S \subseteq [N]$ be a uniformly random subset of size $m$. Then for every $\varepsilon > 0$,
$$\Pr\left[ |S \cap A| \geq (\mu + \varepsilon) m \right] \leq e^{-2 \varepsilon^2 m}$$
and
$$\Pr\left[ |S \cap A| \leq (\mu - \varepsilon) m \right] \leq e^{-2 \varepsilon^2 m}.$$
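The tail bound can be sanity-checked empirically by sampling random $m$-subsets; the parameters below are arbitrary choices of ours, not from the paper.

```python
import math
import random

# Empirical check of the Hoeffding-style tail bound for a uniformly random
# m-subset S of [N] intersecting a fixed set A of density mu = |A| / N:
# Pr[|S ∩ A| >= (mu + eps) * m] <= exp(-2 * eps^2 * m).
random.seed(1)
N, m = 1000, 60
A = set(range(300))               # density mu = 0.3
mu, eps = 0.3, 0.2
bound = math.exp(-2 * eps ** 2 * m)
trials = 20000
hits = sum(
    1
    for _ in range(trials)
    if len(A.intersection(random.sample(range(N), m))) >= (mu + eps) * m
)
assert hits / trials <= bound     # the empirical tail respects the bound
```

With these parameters the bound is roughly $e^{-4.8} \approx 0.008$, while the true tail probability is far smaller, so the check passes with a wide margin.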
2.2 Private Information Retrieval
In this section, we start with the definition of single-round private information retrieval (PIR) protocols [16]. We then define property-based security, taken from previous papers [9, 23, 24]. To simplify the presentation, we present the definitions for a constant number of servers (i.e., independent of the database size); they readily extend to a non-constant number of servers. Intuitively, a PIR protocol starts with the client running a query algorithm $\mathcal{Q}$ and sending the query $q_j$ to the $j$th server. The $j$th server, holding the database $x$ and the query $q_j$, responds with the answer $a_j = \mathcal{A}(j, q_j, x)$. Finally, the client computes the output by applying the reconstruction algorithm $\mathcal{R}$. We next formalize this.
Definition 7 (PIR protocols).
Let $k \in \mathbb{N}$. A $k$-server PIR protocol $\Pi = (\mathcal{Q}, \mathcal{A}, \mathcal{R})$ is given by a 3-tuple of algorithms with the following syntax.
1. The query algorithm $\mathcal{Q}$ is a randomized algorithm that is given the size of the database $n$ and a retrieval index $i \in [n]$. It outputs a query $q_j$ for every server $j \in [k]$ and a state $st$. We denote $((q_1, \ldots, q_k), st) \leftarrow \mathcal{Q}(n, i)$.
2. The answer algorithm $\mathcal{A}$ is a deterministic algorithm that is given an index $j \in [k]$, a query $q_j$, and a database $x \in \{0,1\}^n$ for some $n \in \mathbb{N}$. It outputs a response $a_j = \mathcal{A}(j, q_j, x)$.
3. The reconstruction algorithm $\mathcal{R}$ is a deterministic algorithm that is given the size of the database $n$, the answers $a_1, \ldots, a_k$, and the state $st$. It outputs a value $\mathcal{R}(n, (a_1, \ldots, a_k), st)$.
The protocol is efficient if all three algorithms run in time polynomial in $n$. The total communication complexity of $\Pi$ is the maximum, taken over the randomness $r$ of $\mathcal{Q}$, the retrieval index $i$, and the databases $x$, of the total number of bits sent in the protocol, i.e., $\max_{r, i, x} \sum_{j \in [k]} \left( |q_j| + |a_j| \right)$.
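As a concrete instance of this syntax, the following is a sketch of the classic 2-server XOR protocol of Chor et al. [16] written as a (query, answer, reconstruct) triple; the function names `Q`, `A`, `R` are ours, indices are 1-based, and the database is a tuple of bits.

```python
import secrets

def Q(n, i):
    """Query algorithm: pick a uniformly random subset S of [n] as a bit
    mask; server 1 gets S, server 2 gets S xor {i}.  No state is needed."""
    s = tuple(secrets.randbelow(2) for _ in range(n))
    q2 = list(s)
    q2[i - 1] ^= 1
    return (s, tuple(q2)), None  # ((q_1, q_2), st)

def A(j, q, x):
    """Answer algorithm: XOR of the database bits selected by the query."""
    a = 0
    for bit, mask in zip(x, q):
        a ^= bit & mask
    return a

def R(n, answers, st):
    """Reconstruction: the two queried sets differ exactly in position i,
    so the XOR of the two answers equals x_i."""
    return answers[0] ^ answers[1]

# Correctness on every index of a small database.
x = (1, 0, 1, 1, 0, 1)
for i in range(1, len(x) + 1):
    (q1, q2), st = Q(len(x), i)
    assert R(len(x), (A(1, q1, x), A(2, q2, x)), st) == x[i - 1]
```

Each query has length $n$ and each answer a single bit, so the total communication is $2n + 2$ bits; the sub-polynomial protocols cited in the introduction require far more structure.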
2.3 Defining Security for PIR via Security Properties
We next define the security properties of PIR protocols as they appear in the literature. We first define correctness, which states that in an honest execution, the client outputs the correct value.
Definition 8 (Correctness).
Let $k \in \mathbb{N}$ and let $\Pi = (\mathcal{Q}, \mathcal{A}, \mathcal{R})$ be a $k$-server PIR protocol. We say that $\Pi$ is statistically correct if for any $n \in \mathbb{N}$, any database $x \in \{0,1\}^n$, and any retrieval index $i \in [n]$,
$$\Pr\left[ \mathcal{R}(n, (a_1, \ldots, a_k), st) = x_i \right] \geq 1 - \mu(n),$$
where $((q_1, \ldots, q_k), st) \leftarrow \mathcal{Q}(n, i)$ and $a_j = \mathcal{A}(j, q_j, x)$ for every $j \in [k]$, for some negligible function $\mu(\cdot)$, where the probability is over the randomness of $\mathcal{Q}$. If $\mu \equiv 0$ then we say that $\Pi$ is perfectly correct.
We now define $t$-privacy, stating that no coalition of at most $t$ servers learns anything about the retrieval index of the client.
Definition 9 (Privacy).
Let $k \in \mathbb{N}$, let $t \leq k$, and let $\Pi$ be a $k$-server PIR protocol. We say that $\Pi$ is statistically $t$-private if for any set of servers $T \subseteq [k]$ of size at most $t$, and any two sequences of indices $\{i_n\}_{n \in \mathbb{N}}$ and $\{i'_n\}_{n \in \mathbb{N}}$ with $i_n, i'_n \in [n]$,
$$\left\{ (q_j)_{j \in T} \right\}_{n \in \mathbb{N}} \equiv_s \left\{ (q'_j)_{j \in T} \right\}_{n \in \mathbb{N}},$$
where $((q_1, \ldots, q_k), st) \leftarrow \mathcal{Q}(n, i_n)$ and $((q'_1, \ldots, q'_k), st') \leftarrow \mathcal{Q}(n, i'_n)$. We define computational/perfect $t$-privacy by replacing $\equiv_s$ with $\equiv_c$ and $\equiv$, respectively, in the above equation.
We next define security properties that should hold against malicious behavior by a subset of the servers. Following [23, 24], we first define tampering algorithms (Eriguchi et al. [23, 24] defined tampering functions rather than algorithms); a tampering algorithm models an adversary that can modify some of the answers.
Definition 10 (A tampering algorithm [23, 24]).
Let $k, t \in \mathbb{N}$ with $t \leq k$, and let $\Pi$ be a $k$-server PIR protocol. A tampering algorithm for $\Pi$ is a randomized algorithm $\mathcal{T}$ that is given a set $T \subseteq [k]$ of size at most $t$, a set of queries $(q_j)_{j \in [k]}$, and a database $x \in \{0,1\}^n$ for some $n \in \mathbb{N}$. It outputs answers $(\tilde{a}_1, \ldots, \tilde{a}_k)$ such that $\tilde{a}_j = \mathcal{A}(j, q_j, x)$ for all $j \notin T$, and $\tilde{a}_j$ depends only on $T$, $x$, and $(q_\ell)_{\ell \in T}$ for all $j \in T$.
We now define $t$-error-detection. Roughly, it means that no set of at most $t$ cheating servers can force the client to output a wrong value. Note, however, that the client is allowed to output $\bot$ (even with probability 1).
Definition 11 (Error-detecting PIR [23, 24]).
Let $k, t \in \mathbb{N}$ with $t \leq k$, and let $\Pi$ be a $k$-server PIR protocol. We say that $\Pi$ is statistical $t$-error-detecting if for any $n \in \mathbb{N}$, any database $x \in \{0,1\}^n$, any retrieval index $i \in [n]$, any subset $T \subseteq [k]$ of at most $t$ servers, and any tampering algorithm $\mathcal{T}$ for $\Pi$,
$$\Pr\left[ \mathcal{R}(n, (\tilde{a}_1, \ldots, \tilde{a}_k), st) \in \{x_i, \bot\} \right] \geq 1 - \mu(n)$$
for some negligible function $\mu(\cdot)$, where the probability is over the randomness of $\mathcal{Q}$ and $\mathcal{T}$. We define perfect $t$-error-detecting by requiring the above to hold with $\mu(n) = 0$ for all $n$. We define computational $t$-error-detecting by requiring the above to hold only for ppt algorithms $\mathcal{T}$.
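To see that error-detection is a nontrivial requirement, note that the plain 2-server XOR protocol (a sketch below, with our own function names) is not even 1-error-detecting: a single tampering server that flips its answer bit makes the client output $1 - x_i$, and nothing in the transcript reveals the cheating.

```python
import secrets

def xor_queries(n, i):
    """Random subset S of [n] as a bit mask for server 1; S xor {i} (i is
    1-indexed) for server 2."""
    s = [secrets.randbelow(2) for _ in range(n)]
    q2 = list(s)
    q2[i - 1] ^= 1
    return s, q2

def xor_answer(q, x):
    """XOR of the database bits selected by the query."""
    a = 0
    for bit, mask in zip(x, q):
        a ^= bit & mask
    return a

x = (0, 1, 1, 0)
i = 3
q1, q2 = xor_queries(len(x), i)
# Honest execution reconstructs x_i.
assert xor_answer(q1, x) ^ xor_answer(q2, x) == x[i - 1]
# Tampering algorithm for T = {1}: flip server 1's answer.  The client then
# outputs the complement 1 - x_i with probability 1 and cannot detect it.
tampered = (xor_answer(q1, x) ^ 1) ^ xor_answer(q2, x)
assert tampered == 1 - x[i - 1]
```

This is exactly the gap that the error-detecting compilers of [23, 24] are designed to close.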
Finally, we define $t$-Byzantine robustness, which roughly means that the client outputs the correct value (i.e., $x_i$) even if at most $t$ servers cheat. Note that any Byzantine robust PIR protocol is also correct, by taking $T = \emptyset$.
Definition 12 (Byzantine robust PIR [9]).
Let $k \in \mathbb{N}$, let $t \leq k$, and let $\Pi$ be a $k$-server PIR protocol. We say that $\Pi$ is statistical $t$-Byzantine robust if for any $n \in \mathbb{N}$, any database $x \in \{0,1\}^n$, any retrieval index $i \in [n]$, any subset $T \subseteq [k]$ of at most $t$ servers, and any tampering algorithm $\mathcal{T}$ for $\Pi$,
$$\Pr\left[ \mathcal{R}(n, (\tilde{a}_1, \ldots, \tilde{a}_k), st) = x_i \right] \geq 1 - \mu(n)$$
for some negligible function $\mu(\cdot)$, where the probability is over the randomness of $\mathcal{Q}$ and $\mathcal{T}$. We define perfect $t$-Byzantine robust by requiring the above to hold with $\mu(n) = 0$ for all $n$. We define computational $t$-Byzantine robust by requiring the above to hold only for ppt algorithms $\mathcal{T}$.
3 Defining Security via The Real vs. Ideal World Paradigm
In this section, we present the security definition for PIR via the real vs. ideal world paradigm. We present the definition for both full security (i.e., with guaranteed output delivery) and for security-with-abort. We first define the real world execution, followed by the ideal worlds for full security and security-with-abort. Then, we present the security definitions. Our main contribution in the definition is defining the appropriate functionalities that are computed in the ideal world.
The Real World
Let $k \in \mathbb{N}$ and let $\Pi$ be a $k$-server PIR protocol. Toward defining security, we first describe an execution of $\Pi$ in the presence of an adversary $\mathcal{A}$. The adversary controls a subset $T \subseteq [k]$ of the servers. It receives the database $x$ and the queries the corrupted servers receive from the client, and instructs each corrupted server how to respond. We consider malicious adversaries that can instruct the corrupted servers to respond to the client in an arbitrary way. At the end of the execution, the adversary outputs some function of its view (i.e., its randomness, the database, and the queries it received from the client). For a database size $n$, a database $x \in \{0,1\}^n$, and a retrieval index $i \in [n]$, we let $\mathrm{OUT}^{\mathrm{REAL}}_{\mathcal{A}}(n, x, i)$ and $\mathrm{OUT}^{\mathrm{REAL}}_{C}(n, x, i)$ denote the outputs of $\mathcal{A}$ and the client, respectively, in an execution of $\Pi$. Finally, let
$$\mathrm{REAL}_{\mathcal{A}}(n, x, i) = \left( \mathrm{OUT}^{\mathrm{REAL}}_{\mathcal{A}}(n, x, i), \mathrm{OUT}^{\mathrm{REAL}}_{C}(n, x, i) \right).$$
The Ideal World – Full Security
We next describe an ideal computation with guaranteed output delivery (also referred to as full security) for PIR, where a trusted party performs the computation on behalf of the parties, and the ideal-model adversary cannot abort the computation. Note that we only consider adversaries that do not corrupt the client and corrupt a minority of the servers. The input of the adversary is the input of the servers it corrupts, i.e., the database $x$. An ideal computation with database size $n$, on input a database $x \in \{0,1\}^n$ for the servers and a retrieval index $i \in [n]$ for the client, in the presence of an adversary (a simulator) $\mathcal{S}$ controlling a set $T$, proceeds as follows.
- Parties send inputs to the trusted party: Each honest server $j \notin T$ sends $x$ to the trusted party $\mathcal{TP}$. For each corrupted server $j \in T$, the adversary sends to $\mathcal{TP}$ a database $x^j$ of its choice. If the server does not send any database, then $\mathcal{TP}$ sets $x^j = 0^n$. The client sends $i$ to $\mathcal{TP}$.
- The trusted party performs the computation: If more than half of the databases equal some $x^*$, then the trusted party sends $x^*_i$ to the client. Otherwise, it sends $0$ to the client (this case does not occur when there is an honest majority).
- Output: The client outputs whatever it received from $\mathcal{TP}$, and the adversary outputs some function of its view (i.e., its random string and the database $x$).
For a database size $n$, a database $x \in \{0,1\}^n$, and a retrieval index $i \in [n]$, let $\mathrm{OUT}^{\mathrm{IDEAL}}_{\mathcal{S}}(n, x, i)$ and $\mathrm{OUT}^{\mathrm{IDEAL}}_{C}(n, x, i)$ denote the outputs of $\mathcal{S}$ and the client, respectively, in an execution of the above ideal world. Finally, let
$$\mathrm{IDEAL}^{\mathrm{full}}_{\mathcal{S}}(n, x, i) = \left( \mathrm{OUT}^{\mathrm{IDEAL}}_{\mathcal{S}}(n, x, i), \mathrm{OUT}^{\mathrm{IDEAL}}_{C}(n, x, i) \right).$$
Note that the client always outputs $x_i$ if there is an honest majority.
The Ideal World – Security-With-Abort
We next describe an ideal computation with security-with-abort for PIR. Unlike the ideal model for full security, here the adversary can abort the computation. An ideal computation with database size $n$, on input a database $x \in \{0,1\}^n$ for the servers and a retrieval index $i \in [n]$ for the client, in the presence of an adversary (a simulator) $\mathcal{S}$ controlling a set $T$, proceeds as follows.
- Parties send inputs to the trusted party: Each honest server $j \notin T$ sends $x$ to the trusted party $\mathcal{TP}$. For each corrupted server $j \in T$, the adversary sends to $\mathcal{TP}$ a database $x^j$ of its choice. If the server does not send any database, then $\mathcal{TP}$ sets $x^j = 0^n$. The client sends $i$ to $\mathcal{TP}$.
- The trusted party performs the computation: If the databases that $\mathcal{TP}$ received are not all identical, then $\mathcal{TP}$ sends $\bot$ to the client. Otherwise, it sends it $x_i$.
- Output: The client outputs whatever it received from $\mathcal{TP}$, and the adversary outputs some function of its view (i.e., its random string and the database $x$).
For a database size $n$, a database $x \in \{0,1\}^n$, and a retrieval index $i \in [n]$, let $\mathrm{OUT}^{\mathrm{ABORT}}_{\mathcal{S}}(n, x, i)$ and $\mathrm{OUT}^{\mathrm{ABORT}}_{C}(n, x, i)$ denote the outputs of $\mathcal{S}$ and the client, respectively, in an execution of the above ideal world. Finally, let
$$\mathrm{IDEAL}^{\mathrm{abort}}_{\mathcal{S}}(n, x, i) = \left( \mathrm{OUT}^{\mathrm{ABORT}}_{\mathcal{S}}(n, x, i), \mathrm{OUT}^{\mathrm{ABORT}}_{C}(n, x, i) \right).$$
Defining Security
Next, we define the security of a PIR protocol via the real vs. ideal world paradigm. We define full security and security-with-abort.
Definition 13 (Security via real vs. ideal paradigm).
Let $\ell, t \in \mathbb{N}$ with $t \le \ell$, and let $\Pi$ be an $\ell$-server PIR protocol. We say that $\Pi$ is statistically $t$-fully secure if for every adversary $\mathcal{A}$ controlling a set $\mathcal{C}$ of at most $t$ servers, there exists a simulator $\mathsf{Sim}$ controlling the same subset in the ideal world, such that the real-world and ideal-world output distributions are statistically close.
We say that $\Pi$ is statistically $t$-secure-with-abort if the above holds with $\mathrm{IDEAL}^{\mathrm{abort}}$ replacing $\mathrm{IDEAL}$.
Perfect $t$-security is defined by replacing statistical closeness with equality of distributions in the above equations. Computational $t$-security is defined by replacing statistical closeness with computational indistinguishability and limiting both the real-world adversary and the ideal-world simulator to be ppt algorithms.
Remark 14.
Standard security definitions following the real vs. ideal world paradigm consider adversaries that receive auxiliary information. The same auxiliary information is given to the real-world adversary and its simulator. This is necessary to argue that composition of secure protocols remains secure. If we were to add this to the security definition, then in order to argue that full security is equivalent to the privacy and Byzantine robustness properties (see Theorem 15 below), we would have had to add the auxiliary information to the privacy requirement (the statement and proof of equivalence remain the same). We decided not to include the auxiliary information in the privacy definition since this is not a common definition in the literature.
4 Comparing the Definitions
In this section, we compare the definitions given in the previous section. We start by showing the equivalence between full security and the property-based security definition. Specifically, we show that a PIR protocol is fully secure if and only if it is both private and Byzantine robust (recall that Byzantine robustness implies correctness). The interesting direction is showing that the property-based security definition implies the real vs. ideal definition.
Theorem 15.
Let $t, \ell \in \mathbb{N}$ be such that $t < \ell/2$, and let $\Pi$ be an $\ell$-server PIR protocol. Then, $\Pi$ is $t$-fully secure if and only if it is $t$-private and $t$-Byzantine robust. This holds for perfect, statistical, and computational security alike.
Proof.
We prove the result for statistical security; the other cases are similar. We first prove that if $\Pi$ is both statistically $t$-private and statistically $t$-Byzantine robust, then it is statistically $t$-fully secure. Fix a real-world adversary $\mathcal{A}$ corrupting a set $\mathcal{C}$ of at most $t$ servers. We define its simulator $\mathsf{Sim}$ as follows. First, it sends to the trusted party the database $x$ for every corrupted server. It then computes the client's queries for the retrieval index 1 (i.e., it queries for the index 1), sends the queries of the corrupted servers to the adversary, and outputs whatever the adversary outputs. Note that if $\mathcal{A}$ and $\Pi$ are ppt algorithms, then $\mathsf{Sim}$ is a ppt algorithm.
We now show that the statistical difference between the real and ideal worlds is negligible. For a retrieval index $i$ and a database $x$, let $\mathsf{Out}^{\mathrm{real}}_{\mathrm{client}}(n,x,i)$ denote the output of the client in the real world. Note that it suffices to show that
| (1) |
where .
Fix an event and let
where the probabilities are over the execution of the real/ideal world. Then
Observe that by the -privacy of ,
Indeed, if this was not the case, then
is non-negligible, where . Therefore,
By the -Byzantine robustness of ,
Thus,
We now prove the second direction. Assume that $\Pi$ is statistically $t$-fully secure. We first prove that $\Pi$ is statistically $t$-private. Fix a set $\mathcal{C}$ of size at most $t$ and consider the real-world adversary controlling the servers in $\mathcal{C}$ that outputs the queries it received from the client. Then by assumption, there exists an ideal-world simulator $\mathsf{Sim}$ such that
| (2) |
In particular,
Now, as $\mathsf{Sim}$ does not receive any message from the trusted party, it follows that its output is independent of the retrieval index $i$. Thus,
for all and . Therefore,
Privacy follows from the fact that , where .
We now show that $\Pi$ is statistically $t$-Byzantine robust. Fix a tampering algorithm and consider the adversary that instructs the corrupted servers to apply it to their answers. Then by Equation 2,
for some negligible function. Since $t < \ell/2$, in the ideal world the client always outputs $x_i$. Thus, in the real world the client outputs $x_i$ except with negligible probability, i.e., $\Pi$ is Byzantine robust.
Remark 16.
Note that the above result holds for PIR protocols with more than a single round. To see this, let the simulator run the client on input 1 and simulate the entire execution of the protocol to generate the adversary’s view. The same analysis shows that privacy and Byzantine robustness suffice to argue that the simulator succeeds.
For security-with-abort, we show that the real vs. ideal definition is strictly stronger than the property-based definition. The next theorem states that security-with-abort implies the corresponding property-based security.
Theorem 17.
Let $t < \ell$, and let $\Pi$ be an $\ell$-server PIR protocol. If $\Pi$ is $t$-secure-with-abort, then it is correct, $t$-private, and $t$-error-detecting.
Proof.
We assume statistical security, since the other cases are handled similarly. For correctness, consider an adversary that does not corrupt any server. Then the output of the client in the ideal world is $x_i$. Therefore, in the real world, the client outputs $x_i$ except with negligible probability. The proof that $\Pi$ is statistically $t$-private is identical to the proof done in Theorem 15 and is therefore omitted. We now show that $\Pi$ is statistically $t$-error-detecting. Fix a tampering algorithm and consider the adversary that instructs the corrupted servers to apply it to their answers. Then there is a simulator such that the real-world and ideal-world outputs are statistically close
for all $x$ and $i$. Since in the ideal world the client never outputs $1 - x_i$, it follows that in the real world the client outputs $1 - x_i$ with negligible probability, i.e., $\Pi$ is error-detecting.
We next show that the converse does not hold: there exists an $\ell$-server PIR protocol that is correct, 1-private, and 1-error-detecting but is not computationally 1-secure-with-abort. Intuitively, we construct a protocol in which a malicious server can force a “selective abort” of the client, that is, the adversary forces the client to abort if and only if the client holds certain indices (e.g., $i = 1$). This is impossible to simulate since the simulation is independent of $i$. We next formalize this intuition.
Example 18.
Let $\Pi$ be a perfectly 1-secure-with-abort PIR protocol (e.g., each server sends its database to the client; if all databases are the same, then the client outputs $x_i$, else it aborts). Consider the following PIR protocol $\Pi'$. The client computes the same queries as in $\Pi$, all servers respond with the same messages, and additionally, the first server sends an additional bit $b$. An honest server sends $b = 0$. If the client has input $i = 1$ and it received $b = 1$, then it outputs $\bot$. Otherwise, it proceeds as in $\Pi$. Clearly, $\Pi'$ is correct, 1-private, and 1-error-detecting.
We next show that $\Pi'$ is not 1-secure-with-abort. Intuitively, this is because the adversary can force a probability of abort on $i = 1$ that is different from the probability on $i = 2$ (although each server does not know $i$). Fix an adversary that corrupts the first server and sends $b = 1$, and assume towards contradiction that there exists a simulator $\mathsf{Sim}$ for it. Consider an execution of $\Pi'$ with a fixed database $x$ and a uniformly random retrieval index (i.e., $i = 1$ with probability 1/2 and $i = 2$ with probability 1/2). Then in the real world, the client outputs $\bot$ whenever $i = 1$ and outputs $x_i$ whenever $i = 2$.
Let us analyze the ideal world. Let $q$ denote the probability that $\mathsf{Sim}$ sends to the trusted party a database different from $x$ (and thus the client outputs $\bot$). Since in the ideal world the simulator is independent of $i$, the client outputs $\bot$ with probability $q$ both when $i = 1$ and when $i = 2$. Clearly, $q$ cannot simultaneously equal 1 and 0 as in the real world; thus the two worlds can be easily distinguished.
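The distinguishing gap can be checked numerically. This is a sketch under our reading of Example 18 (the corrupted first server always sends $b = 1$, so the client aborts exactly when $i = 1$); the names `real_abort_prob` and `ideal_abort_prob` are illustrative, not from the paper:

```python
# Sketch: no single simulator abort probability q can match the real
# world, where the abort probability depends on the retrieval index i.
def real_abort_prob(i):
    return 1.0 if i == 1 else 0.0  # the client aborts exactly when i = 1

def ideal_abort_prob(i, q):
    return q  # the simulator is independent of i, so q cannot depend on i

# For every candidate q, the ideal world disagrees with the real world
# on some index i by at least 1/2:
gaps = [max(abs(real_abort_prob(i) - ideal_abort_prob(i, q)) for i in (1, 2))
        for q in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

Any distinguisher that sees the index and whether the client aborted therefore separates the two worlds with constant advantage.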
One might expect that the reason the above example worked is that the client aborts with different probabilities for different inputs. We next show that adding such a requirement is still insufficient to argue for simulation-based security. Roughly, this is done by letting the client send to the first server a random index $r$ that informs the server for which retrieval index the client will abort: the client aborts when $i = r$ and the additional bit from the previous example is 1. Thus, in the real world, the adversary can choose for which inputs the client will abort, which cannot be simulated in the ideal world. We next formalize this.
Example 19.
Let $\Pi$ be a perfectly 1-secure-with-abort PIR protocol (e.g., the servers send their databases to the client, who then proceeds like the trusted party). Consider the following PIR protocol $\Pi''$. The client computes the same queries as in $\Pi$ and appends to the first query an index $r \in [n]$ sampled uniformly at random. All servers respond with the same messages, and additionally, the first server sends an additional bit $b$. An honest server sends $b = 0$. If the client has input $i = r$ and it received $b = 1$, then it outputs $\bot$. Otherwise, it proceeds as in $\Pi$. Clearly, $\Pi''$ is correct, 1-private, 1-error-detecting, and the probability that the client aborts is $1/n$ if the first server cheats (and 0 otherwise). The formal argument that $\Pi''$ is not secure-with-abort is similar to the previous example and is therefore omitted.
5 A Generic Transformation From Private PIR to Secure-With-Abort PIR
In this section, we show a generic transformation of a private PIR protocol to a secure-with-abort protocol, based on Eriguchi et al. [24]. The original construction of [24] is insecure according to Definition 13 (see Remark 26 below). However, we prove that a modification of the construction is secure. We prove the following.
Theorem 20.
Let $\ell, t \in \mathbb{N}$ with $t < \ell$. There exists a compiler that transforms any statistically correct and $t$-private $\ell$-server PIR protocol $\Pi$ into a $t$-secure-with-abort protocol. Moreover, if the total communication complexity of $\Pi$ is $c$, then the new protocol has total communication complexity $O(m \cdot c)$, where $m$ is the number of parallel repetitions used by the compiler.
Proof.
We prove the result for statistical security. The computational case follows from the fact that the simulator we construct is efficient (if the adversary is) and from simple hybrid arguments. Let $\Pi$ be a statistically correct and statistically $t$-private PIR protocol. The idea of the transformation is the following. First, the client generates the queries $q_1, \ldots, q_\ell$ as in $\Pi$. Then, with probability 1/2, the client samples two distinct indices $a, b \in [\ell]$ at random, sends $q_j$ to the $j$th server for every $j \neq b$, and sends $q_a$ to server $b$ (i.e., query $q_a$ is a duplicate query). With probability 1/2, the client sends $q_j$ to $\mathcal{S}_j$ for every $j$ (without duplicating any of them). We refer to the first kind of execution, where the client duplicated a query, as a test execution, and we refer to the second kind of execution as a real execution. The $j$th server, upon receiving $q_j$, responds as $\mathcal{S}_j$ does in $\Pi$, and upon receiving $q_a$, responds as $\mathcal{S}_a$ does in $\Pi$.
Notice that if, in a test execution, the server that receives the duplicated query is honest, then the view of the adversary is identical to its view in a real execution. Thus, if the adversary cheats in the test execution, then it cheats in the real execution as well.
We then let the parties repeat this process in parallel sufficiently many times. The output of the client is defined as follows. If in one of the test executions the client receives different answers from the two servers that were sent the same query, then one of them is malicious and so the client aborts. Otherwise, the client computes the output of each real execution and outputs the majority value. Intuitively, if we repeat this sufficiently many times, then the adversary cannot distinguish the real executions from most of the test executions. Therefore, if it cheats on too many real executions, then the client will catch it in one of the test executions. Otherwise, it cheats on a minority of the real executions; hence, the correctness of the underlying PIR protocol implies that the client computes the correct value most of the time.
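The client's final decision rule described above can be sketched as follows. This is our own toy illustration (the function name `reconstruct` and the input encoding are hypothetical), not the paper's protocol:

```python
# Sketch of the compiled client's decision rule: abort if any test
# execution produced inconsistent duplicate answers, otherwise take the
# majority over the real executions.
from collections import Counter

ABORT = "bot"  # stands in for the abort symbol

def reconstruct(test_pairs, real_outputs):
    """`test_pairs` is a list of (answer, duplicate_answer) pairs from
    the test executions; `real_outputs` are the values recovered from
    the real executions (an odd number, so the majority is defined)."""
    if any(ans != dup for ans, dup in test_pairs):
        # a server answered the same query twice with different values,
        # so some server cheated: abort
        return ABORT
    return Counter(real_outputs).most_common(1)[0][0]
```

For example, `reconstruct([("a", "a")], [1, 1, 0])` returns the majority value 1, while any mismatching test pair forces an abort.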
For convenience, instead of deciding at random and independently which execution is a test and which is a real execution, we let the client randomly sample exactly half of the executions to be real. (Note that if $\Pi$ is perfectly correct, our protocol achieves only statistical correctness, since it could be the case that the client duplicates a query in every execution of $\Pi$.) We next formalize this.
Protocol 21.
Inputs: The client holds the database length $n$ and a retrieval index $i \in [n]$. Each server holds a database $x \in \{0,1\}^n$.
Let $m$ be an integer such that $m/2$ is odd, to be determined by the analysis below.
- The query algorithm:
-
The client samples a set $R \subseteq [m]$ of size $m/2$ uniformly at random and computes $m$ query tuples $(q^k_1, \ldots, q^k_\ell)$ of $\Pi$ for the index $i$ independently. It then does the following for all $k \in [m]$.
-
1.
Sample a pair $(a_k, b_k)$ of distinct elements of $[\ell]$ independently and uniformly at random.
-
2.
If $k \in R$, then set $\tilde{q}^k_j = q^k_j$ for all $j \in [\ell]$. Otherwise, for every $j \neq b_k$ set $\tilde{q}^k_j = q^k_j$, and set $\tilde{q}^k_{b_k} = q^k_{a_k}$.
Let $\tilde{q}_j = (\tilde{q}^1_j, \ldots, \tilde{q}^m_j)$. The client sends $\tilde{q}_j$ to the $j$th server.
- The answer algorithm:
-
For every $j \in [\ell]$ and $k \in [m]$, if server $\mathcal{S}_j$ receives $q^k_j$ from the client, it computes the answer $a^k_j$ that $\mathcal{S}_j$ computes in $\Pi$. Else, it receives $q^k_{a_k}$ and computes the answer $a^k_j$ that $\mathcal{S}_{a_k}$ computes in $\Pi$. It sends $(a^1_j, \ldots, a^m_j)$ to the client.
- The reconstruction algorithm:
-
The client computes the output $y^k$ of $\Pi$ for all $k \in R$. It outputs $\bot$ if there exists $k \notin R$ such that $a^k_{a_k} \neq a^k_{b_k}$. Otherwise, the client outputs $\mathrm{majority}(\{y^k\}_{k \in R})$.
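The query-side bookkeeping of Protocol 21 can be sketched as runnable code. This is a toy illustration only: the "queries" here are just labeled pairs carrying the index, so there is no privacy; the point is the test/real split and the duplicate placement. The function name `compile_queries` is our own:

```python
# A runnable sketch of the duplicate-query test from Protocol 21.
import random

def compile_queries(num_servers, m, i, rng):
    """Return (R, assignment). R is the set of real executions; in each
    test execution, one server's query is replaced by a duplicate of
    another server's query."""
    R = set(rng.sample(range(m), m // 2))  # exactly half are real
    assignment = []
    for k in range(m):
        queries = {j: (j, i) for j in range(num_servers)}  # toy queries
        a, b = rng.sample(range(num_servers), 2)           # distinct pair
        if k not in R:
            queries[b] = queries[a]  # server b receives a duplicate of q_a
        assignment.append(((a, b), queries))
    return R, assignment

rng = random.Random(0)
R, assignment = compile_queries(num_servers=3, m=6, i=4, rng=rng)
```

A cheating server that answers the duplicated query differently from its owner is caught exactly because both answers refer to the same query.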
We now show the security of the compiled protocol. Fix an adversary $\mathcal{A}$ corrupting a set $\mathcal{C}$ of at most $t$ servers. We define its simulator $\mathsf{Sim}$, whose input is the database $x$ held by the corrupted servers, as follows. Sample queries exactly as the client does for the retrieval index 1 and send the queries of the corrupted servers to $\mathcal{A}$. For every corrupted server $\mathcal{S}_j$ let $a^k_j$ denote the response $\mathcal{A}$ made for server $j$ in execution $k$, and for every honest server let $a^k_j$ be the honest answer. If there exists $k \notin R$ such that $a^k_{a_k} \neq a^k_{b_k}$, then send to the trusted party a different database for the corrupted servers. Otherwise, send $x$ as the input of every corrupted server. Finally, output whatever $\mathcal{A}$ outputs and halt. We show that
| (3) |
To prove this, we first prove the following lemma, stating that $\mathcal{A}$ cannot force the client to output an incorrect value with noticeable probability.
Lemma 22.
Let $E$ be the event that $a^k_{a_k} = a^k_{b_k}$ for every $k \notin R$ (i.e., the client did not output $\bot$), and let $F$ denote the event that $\mathrm{majority}(\{y^k\}_{k \in R}) \neq x_i$ (i.e., the client output the wrong value). Then
| (4) |
where the probability is over the random coins of the client and the adversary.
Proof.
We prove that the bound holds for any fixed randomness of the adversary. In the following, we assume that the randomness of $\mathcal{A}$ and the queries of the client before duplication (i.e., the $q^k_j$'s) are fixed. Observe that for every $k$, the adversary cannot distinguish the case where $k \in R$ and the case where $k \notin R$ and the duplicated query was sent to a server it does not control. Therefore, if it instructs a server to cheat in one case, then it will instruct it to cheat in the other case.
To this end, we say that a server cheated in execution $k$ if its answer is different from the honest answer in $\Pi$. That is, if it were to receive its original query before duplication, then it would send a different answer. Let $B \subseteq [m]$ denote the set of all executions in which the adversary cheated. As we fixed the randomness of the adversary and the queries, the set $B$ is well defined. We show that if $B$ is too large, then the client will catch the adversary with overwhelming probability in one of the test executions, and if it is too small, then it is unlikely that the majority of the outputs computed for the real executions will be incorrect. We separate the proof into two cases.
We first analyze the probability in (4) assuming $|B| \leq m/4$. We show that in this case it is unlikely that the client outputs the wrong value, i.e., that $F$ occurs with low probability. Observe that the expectation of $|B \cap R|$ is at most $m/8$. By Hoeffding’s inequality (Theorem 6), it follows that
Let $k \in R \setminus B$, i.e., the $k$th execution is an honest real execution. By the statistical correctness of the original PIR protocol, the probability that the client outputs an incorrect value in the $k$th execution of the protocol is negligible. Therefore,
We now analyze the probability in (4) assuming $|B| > m/4$. We show that in this case, the client will catch the adversary cheating with high probability. Since the expectation of $|B \setminus R|$ is at least $m/8$, by Hoeffding’s inequality, it follows that
Next, fix an honest server $h$, and for every $k \in B \setminus R$ fix a server $c_k$ that cheated in execution $k$. For every $k \in B \setminus R$, let $C_k$ be the event that $\{a_k, b_k\} = \{h, c_k\}$ (that is, the event that the adversary gets caught cheating in the $k$th execution). Observe that for every $k \in B \setminus R$, the client will catch the adversary with probability at least
Now, observe that for every $k \notin R$, if the client did not output $\bot$, then either the adversary did not cheat in the $k$th execution or $C_k$ did not occur. Therefore,
We conclude that
We now use Lemma 22 to show that Equation 3 holds. Specifically, we show that conditioned on the event $E \wedge F$ not occurring, the two ensembles are statistically close. First, note that by the $t$-privacy of $\Pi$, the queries that $\mathcal{A}$ receives in the real world are statistically close to the queries it receives from $\mathsf{Sim}$ in the ideal world. Therefore, its responses are statistically close. Now, if the client aborts in the real world, then there exists a test execution where it gets different responses. Thus, the same holds in the ideal world (except with negligible probability), hence the simulator changes its database. This means that in both worlds, the client outputs $\bot$. Otherwise, since we condition on $F$ not occurring, the client outputs $x_i$ in both worlds. Finally, taking the number of repetitions $m$ large enough implies that $E \wedge F$ occurs with negligible probability, which completes the proof.
Using the $t$-private $\ell$-server PIR protocol of Woodruff and Yekhanin [46], we obtain the following.
Corollary 23.
For every and there exists a -secure-with-abort -server PIR protocol with communication complexity
Applying Theorem 20 to the protocol resulting from the compiler of Barkol et al. [5] applied to the 2-server protocol of Dvir and Gopi [20] yields the following.
Corollary 24.
For every , there exists a -secure-with-abort -server PIR protocol with communication complexity
Corollary 25.
For every , there exists a -secure-with-abort -server PIR protocol with communication complexity
Remark 26 (On the insecurity of the compiler of [24]).
In the original construction of Eriguchi et al. [24], instead of outputting the majority of the values computed in the real executions, the client tests whether all of these values are equal. If not, it outputs $\bot$, and if they are equal, the client outputs this value. Although their protocol is correct, $t$-private, and $t$-error-detecting, it is not $t$-secure-with-abort. Intuitively, this is because in the underlying protocol the adversary can cause the client to output an incorrect value for some inputs (recall that $\Pi$ is not guaranteed to satisfy any robustness property). This causes the client to abort only on certain inputs; thus, it has the same selective-abort issue presented in Examples 18 and 19. We next give details.
Consider the protocol $\Pi$, where each server sends its database to the client, and the client computes the output as follows. If $i = 1$ then output the $i$th entry of the database sent by the first server, and otherwise, output the $i$th entry of the database sent by the second server. Clearly this protocol is correct and private. However, the compiled protocol of Eriguchi et al. [24] is not 1-secure-with-abort, since an adversary corrupting the first server can force the client to output an incorrect value only on $i = 1$. Therefore, the client will abort only when $i = 1$.
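The counterexample above can be checked on a toy instance. This is our own sketch (0-indexed databases, hypothetical function name `underlying_output`), under the reading that the client trusts the first server on index 1 and the second server elsewhere:

```python
# Toy version of the Remark 26 counterexample: a corrupted first server
# can corrupt the client's output only for the single index i = 1, so
# the equality-test compiler aborts selectively on i = 1.
def underlying_output(db1, db2, i):
    """Client's rule in the underlying protocol: trust server 1 on
    index 1 and server 2 on every other index."""
    return db1[i] if i == 1 else db2[i]

x = "0101"    # true database (held by the honest second server)
bad = "0001"  # database sent by the corrupted first server: flips x_1
wrong = [i for i in range(len(x)) if underlying_output(bad, x, i) != x[i]]
# the output is wrong exactly on i = 1, giving a selective abort
```

Since the set of corrupted indices depends on $i$, no simulator that is independent of $i$ can reproduce the abort pattern.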
References
- [1] Bar Alon, Amos Beimel, and Or Lasri. Simplified PIR and CDS protocols and improved linear secret-sharing schemes. Cryptology ePrint Archive, Paper 2024/1599, 2024. URL: https://eprint.iacr.org/2024/1599.
- [2] Andris Ambainis. Upper bound on the communication complexity of private information retrieval. In P. Degano, R. Gorrieri, and A. Marchetti-Spaccamela, editors, Proc. of the 24th International Colloquium on Automata, Languages and Programming, volume 1256 of Lecture Notes in Computer Science, pages 401–407, 1997. doi:10.1007/3-540-63165-8_196.
- [3] Hadar Amran. Constructing multi-servers private information retrieval protocols. Master’s thesis, Ben-Gurion University, 2016.
- [4] Karim A. Banawan and Sennur Ulukus. The capacity of private information retrieval from byzantine and colluding databases. IEEE Trans. Inf. Theory, 65(2):1206–1219, 2019. doi:10.1109/TIT.2018.2869154.
- [5] Omer Barkol, Yuval Ishai, and Enav Weinreb. On locally decodable codes, self-correctable codes, and t-private PIR. In Moses Charikar, Klaus Jansen, Omer Reingold, and José D. P. Rolim, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 10th International Workshop, APPROX 2007, and 11th International Workshop, RANDOM 2007, volume 4627 of Lecture Notes in Computer Science, pages 311–325. Springer, 2007. doi:10.1007/978-3-540-74208-1_23.
- [6] Amos Beimel and Yuval Ishai. Information-theoretic private information retrieval: A unified construction. In Fernando Orejas, Paul G. Spirakis, and Jan van Leeuwen, editors, Automata, Languages and Programming, 28th International Colloquium, ICALP 2001, Crete, Greece, July 8-12, 2001, Proceedings, volume 2076 of Lecture Notes in Computer Science, pages 912–926. Springer, 2001. doi:10.1007/3-540-48224-5_74.
- [7] Amos Beimel, Yuval Ishai, Ranjit Kumaresan, and Eyal Kushilevitz. On the cryptographic complexity of the worst functions. In Yehuda Lindell, editor, TCC 2014, volume 8349 of Lecture Notes in Computer Science, pages 317–342. Springer, 2014. doi:10.1007/978-3-642-54242-8_14.
- [8] Amos Beimel, Yuval Ishai, and Eyal Kushilevitz. General constructions for information-theoretic private information retrieval. Journal of Computer and System Sciences, 71(2):213–247, 2005. doi:10.1016/j.jcss.2005.03.002.
- [9] Amos Beimel and Yoav Stahl. Robust information-theoretic private information retrieval. J. Cryptol., 20(3):295–321, 2007. doi:10.1007/S00145-007-0424-2.
- [10] Shany Ben-David, Yael Tauman Kalai, and Omer Paneth. Verifiable private information retrieval. In Eike Kiltz and Vinod Vaikuntanathan, editors, Theory of Cryptography – 20th International Conference, TCC 2022, Part III, volume 13749 of Lecture Notes in Computer Science, pages 3–32. Springer, 2022. doi:10.1007/978-3-031-22368-6_1.
- [11] Zvika Brakerski and Vinod Vaikuntanathan. Efficient fully homomorphic encryption from (standard) LWE. In Rafail Ostrovsky, editor, IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS 2011, pages 97–106. IEEE Computer Society, 2011. doi:10.1109/FOCS.2011.12.
- [12] Christian Cachin, Silvio Micali, and Markus Stadler. Computationally private information retrieval with polylogarithmic communication. In Jacques Stern, editor, Advances in Cryptology – EUROCRYPT ’99. Springer, 1999. doi:10.1007/3-540-48910-X_28.
- [13] Ran Canetti. Security and composition of multiparty cryptographic protocols. Journal of Cryptology, 13(1):143–202, 2000. doi:10.1007/S001459910006.
- [14] Yeow Meng Chee, Tao Feng, San Ling, Huaxiong Wang, and Liang Feng Zhang. Query-efficient locally decodable codes of subexponential length. Computational Complexity, 22(1):159–189, 2013. doi:10.1007/s00037-011-0017-1.
- [15] Benny Chor and Niv Gilboa. Computationally private information retrieval (extended abstract). In Frank Thomson Leighton and Peter W. Shor, editors, Proceedings of the Twenty-Ninth Annual ACM Symposium on the Theory of Computing, pages 304–313. ACM, 1997. doi:10.1145/258533.258609.
- [16] Benny Chor, Oded Goldreich, Eyal Kushilevitz, and Madhu Sudan. Private information retrieval. In 36th Annual Symposium on Foundations of Computer Science, pages 41–50. IEEE Computer Society, 1995. doi:10.1109/SFCS.1995.492461.
- [17] Simone Colombo, Kirill Nikitin, Henry Corrigan-Gibbs, David J. Wu, and Bryan Ford. Authenticated private information retrieval. In Joseph A. Calandrino and Carmela Troncoso, editors, 32nd USENIX Security Symposium, USENIX Security 2023, pages 3835–3851. USENIX Association, 2023. URL: https://www.usenix.org/conference/usenixsecurity23/presentation/colombo.
- [18] Casey Devet, Ian Goldberg, and Nadia Heninger. Optimally robust private information retrieval. In Tadayoshi Kohno, editor, Proceedings of the 21st USENIX Security Symposium, pages 269–283. USENIX Association, 2012. URL: https://www.usenix.org/conference/usenixsecurity12/technical-sessions/presentation/devet.
- [19] Yevgeniy Dodis, Shai Halevi, Ron D. Rothblum, and Daniel Wichs. Spooky encryption and its applications. In Matthew Robshaw and Jonathan Katz, editors, Advances in Cryptology – CRYPTO 2016 – 36th Annual International Cryptology Conference, Part III, volume 9816 of Lecture Notes in Computer Science, pages 93–122. Springer, 2016. doi:10.1007/978-3-662-53015-3_4.
- [20] Zeev Dvir and Sivakanth Gopi. 2-server PIR with subpolynomial communication. J. ACM, 63(4):39:1–39:15, 2016. doi:10.1145/2968443.
- [21] Cynthia Dwork, Michael Langberg, Moni Naor, Kobbi Nissim, and Omer Reingold. Succinct proofs for NP and spooky interactions. Unpublished manuscript, https://www.wisdom.weizmann.ac.il/~naor/PAPERS/spooky.pdf, 2004.
- [22] Klim Efremenko. 3-query locally decodable codes of subexponential length. SIAM J. Comput., 41(6):1694–1703, 2012. doi:10.1137/090772721.
- [23] Reo Eriguchi, Kaoru Kurosawa, and Koji Nuida. Multi-server PIR with full error detection and limited error correction. In Dana Dachman-Soled, editor, 3rd Conference on Information-Theoretic Cryptography, ITC 2022, volume 230 of LIPIcs, pages 1:1–1:20. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPICS.ITC.2022.1.
- [24] Reo Eriguchi, Kaoru Kurosawa, and Koji Nuida. On the optimal communication complexity of error-correcting multi-server PIR. In Eike Kiltz and Vinod Vaikuntanathan, editors, Theory of Cryptography – 20th International Conference, TCC 2022, Part III, volume 13749 of Lecture Notes in Computer Science, pages 60–88. Springer, 2022. doi:10.1007/978-3-031-22368-6_3.
- [25] Reo Eriguchi, Kaoru Kurosawa, and Koji Nuida. Efficient and generic methods to achieve active security in private information retrieval and more advanced database search. In Marc Joye and Gregor Leander, editors, Advances in Cryptology – EUROCRYPT 2024, Part V, volume 14655 of Lecture Notes in Computer Science, pages 92–121. Springer, 2024. doi:10.1007/978-3-031-58740-5_4.
- [26] Craig Gentry and Zulfikar Ramzan. Single-database private information retrieval with constant communication rate. In Luís Caires, Giuseppe F. Italiano, Luís Monteiro, Catuscia Palamidessi, and Moti Yung, editors, Automata, Languages and Programming, 32nd International Colloquium, ICALP 2005, volume 3580 of Lecture Notes in Computer Science, pages 803–815. Springer, 2005. doi:10.1007/11523468_65.
- [27] Fatemeh Ghasemi, Swastik Kopparty, and Madhu Sudan. Improved PIR schemes using matching vectors and derivatives. Technical Report arXiv:2411.11611, 2024. doi:10.48550/arXiv.2411.11611.
- [28] Ian Goldberg. Improving the robustness of private information retrieval. In 2007 IEEE Symposium on Security and Privacy (S&P 2007), pages 131–148. IEEE Computer Society, 2007. doi:10.1109/SP.2007.23.
- [29] Oded Goldreich. Foundations of Cryptography – VOLUME 2: Basic Applications. Cambridge University Press, 2004. doi:10.1017/CBO9780511721656.
- [30] Yuval Ishai and Eyal Kushilevitz. Improved upper bounds on information-theoretic private information retrieval (extended abstract). In Jeffrey Scott Vitter, Lawrence L. Larmore, and Frank Thomson Leighton, editors, Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing, pages 79–88. ACM, 1999. doi:10.1145/301250.301275.
- [31] Toshiya Itoh. Efficient private information retrieval. IEICE Trans. Fundamentals of Electronics, Communications and Computer Sciences, E82-A(1):11–20, 1999.
- [32] Toshiya Itoh and Yasuhiro Suzuki. New constructions for query-efficient locally decodable codes of subexponential length. CoRR, abs/0810.4576, 2008. arXiv:0810.4576.
- [33] Toshiya Itoh and Yasuhiro Suzuki. Improved constructions for query-efficient locally decodable codes of subexponential length. IEICE Transactions on Information and Systems, E93-D(2):263–270, 2010. doi:10.1587/transinf.e93.d.263.
- [34] Jonathan Katz and Luca Trevisan. On the efficiency of local decoding procedures for error-correcting codes. In F. Frances Yao and Eugene M. Luks, editors, Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, pages 80–86. ACM, 2000. doi:10.1145/335305.335315.
- [35] Kiran S. Kedlaya and Sergey Yekhanin. Locally decodable codes from nice subsets of finite fields and prime factors of Mersenne numbers. Technical Report TR07-040, Electronic Colloquium on Computational Complexity, 2007. URL: https://eccc.weizmann.ac.il/report/2007/040/.
- [36] Iordanis Kerenidis and Ronald de Wolf. Exponential lower bound for 2-query locally decodable codes via a quantum argument. Journal of Computer and System Sciences, 69(3):395–420, 2004. doi:10.1016/J.JCSS.2004.04.007.
- [37] Kaoru Kurosawa. How to correct errors in multi-server PIR. In Steven D. Galbraith and Shiho Moriai, editors, Advances in Cryptology – ASIACRYPT 2019, Part II, volume 11922 of Lecture Notes in Computer Science, pages 564–574. Springer, 2019. doi:10.1007/978-3-030-34621-8_20.
- [38] Eyal Kushilevitz and Rafail Ostrovsky. Replication is NOT needed: SINGLE database, computationally-private information retrieval. In 38th Annual Symposium on Foundations of Computer Science, FOCS ’97, 1997, pages 364–373. IEEE Computer Society, 1997. doi:10.1109/SFCS.1997.646125.
- [39] Tianren Liu, Vinod Vaikuntanathan, and Hoeteck Wee. Conditional disclosure of secrets via non-linear reconstruction. In Jonathan Katz and Hovav Shacham, editors, Advances in Cryptology – CRYPTO 2017, Part I, volume 10401 of Lecture Notes in Computer Science, pages 758–790. Springer, 2017. doi:10.1007/978-3-319-63688-7_25.
- [40] Tianren Liu, Vinod Vaikuntanathan, and Hoeteck Wee. Towards breaking the exponential barrier for general secret sharing. In EUROCRYPT 2018, volume 10820 of LNCS, pages 567–596, 2018. doi:10.1007/978-3-319-78381-9_21.
- [41] Eran Mann. Private access to distributed information. Master’s thesis, Technion – Israel Institute of Technology, Haifa, 1998.
- [42] Silvio Micali and Phillip Rogaway. Secure computation (abstract). In Joan Feigenbaum, editor, Advances in Cryptology – CRYPTO ’91, 11th Annual International Cryptology Conference, volume 576 of Lecture Notes in Computer Science, pages 392–404. Springer, 1991. doi:10.1007/3-540-46766-1_32.
- [43] Hua Sun and Syed Ali Jafar. The capacity of private information retrieval. IEEE Trans. Inf. Theory, 63(7):4075–4088, 2017. doi:10.1109/TIT.2017.2689028.
- [44] Hua Sun and Syed Ali Jafar. The capacity of robust private information retrieval with colluding databases. IEEE Trans. Inf. Theory, 64(4):2361–2370, 2018. doi:10.1109/TIT.2017.2777490.
- [45] Stephanie Wehner and Ronald de Wolf. Improved lower bounds for locally decodable codes and private information retrieval. In L. Caires, G. F. Italiano, L. Monteiro, C. Palamidessi, and M. Yung, editors, Proc. of the 32nd International Colloquium on Automata, Languages and Programming, volume 3580 of Lecture Notes in Computer Science, pages 1424–1436, 2005. doi:10.1007/11523468_115.
- [46] David P. Woodruff and Sergey Yekhanin. A geometric approach to information-theoretic private information retrieval. In 20th Annual IEEE Conference on Computational Complexity (CCC 2005), pages 275–284. IEEE Computer Society, 2005. doi:10.1109/CCC.2005.2.
- [47] E.Y. Yang, Jie Xu, and K.H. Bennett. Private information retrieval in the presence of malicious failures. In Proceedings 26th Annual International Computer Software and Applications, pages 805–810, 2002. doi:10.1109/CMPSAC.2002.1045104.
- [48] Sergey Yekhanin. Towards 3-query locally decodable codes of subexponential length. In Proceedings of the 39th Annual ACM Symposium on Theory of Computing (STOC), pages 266–274, 2007. doi:10.1145/1250790.1250830.
- [49] Liang Feng Zhang and Reihaneh Safavi-Naini. Verifiable multi-server private information retrieval. In Ioana Boureanu, Philippe Owesarski, and Serge Vaudenay, editors, Applied Cryptography and Network Security – 12th International Conference, ACNS 2014, Lausanne, Switzerland, volume 8479 of Lecture Notes in Computer Science, pages 62–79. Springer, 2014. doi:10.1007/978-3-319-07536-5_5.
