Memory Erasability Ampliﬁcation
Fig. 6. The algorithm C2A that realizes a (Φ, n, d, k)-AoNT from a converter π, where π constructs PM⟨Φ^k⟩ from IMD⟨Φ, n, d⟩.
5.2 Perfectly Secure AoNT Based on Matrices with Ramp Minimum Distance
This subsection shows how one can improve the standard realization of AoNTs
based on linear block codes of Canetti et al. [3] by using our novel concept of
ramp minimum distance.
The Standard Realization. Let G be the k × n generator matrix with elements
in GF(q) of a linear block code with minimum distance d. The encoding function
of the perfectly secure (GF(q), (n + k), d, k)-AoNT is as follows:
aenc(a ∈ GF(q)^k) : b ←$ GF(q)^n ; y ← ( I_n 0 ; G I_k ) · ( b ; a ) ; return y,

where ( I_n 0 ; G I_k ) is the (n + k) × (n + k) block matrix applied to the column vector consisting of b followed by a.
Further details are given in the full paper.
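As a concrete illustration, here is a minimal sketch of this encoding over GF(2) (so q = 2 and field addition is XOR). The function names, the decoder, and the toy generator matrix used below are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def aenc(a, G, rng):
    """Standard AoNT encoding over GF(2): y = (I_n 0 ; G I_k) . (b ; a) mod 2."""
    k, n = G.shape
    b = rng.integers(0, 2, size=n)      # fresh randomness b <-$ GF(2)^n
    y_top = b                           # I_n . b + 0 . a = b
    y_bot = (G @ b + a) % 2             # G . b + I_k . a
    return np.concatenate([y_top, y_bot])

def adec(y, G):
    """Natural inverse of the encoding above: a = y_bot - G . y_top (mod 2)."""
    k, n = G.shape
    b, c = y[:n], y[n:]
    return (c - G @ b) % 2
```

A round trip with a toy 2 × 3 generator matrix recovers the message from the (n + k)-element codeword.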
Let us now show how to use the concept of ramp minimum distance to construct better AoNTs.
Definition 5.2. A k × n matrix G with elements in GF(q) has ramp minimum
distance d if for every r ∈ {1, . . . , k}, every r × (n − (d − r)) submatrix of G has
rank r.
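For small parameters, Definition 5.2 can be checked by brute force. The sketch below (our own helper names, feasible only for small k and n) tests every r × (n − (d − r)) submatrix over GF(2):

```python
from itertools import combinations
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination with XOR."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # swap pivot row into place
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]               # eliminate column c elsewhere
        rank += 1
    return rank

def has_ramp_min_distance(G, d):
    """Definition 5.2: every r x (n - (d - r)) submatrix has rank r, r = 1..k."""
    k, n = G.shape
    for r in range(1, k + 1):
        ncols = n - (d - r)
        if ncols < r:
            return False
        for rows in combinations(range(k), r):
            for cols in combinations(range(n), ncols):
                if gf2_rank(G[np.ix_(rows, cols)]) != r:
                    return False
    return True
```

For example, the all-ones 1 × 3 row vector has ramp minimum distance 3 (every single entry is nonzero), while a row containing a zero does not.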
Note that the concept of (regular) minimum distance comes from coding theory,
and requires that all k × (n − (d − 1)) sub-matrices of G have rank k (which
is equivalent to saying that for every r ∈ {1, . . . , k}, all r × (n − (d − 1)) submatrices of G have rank r), where G is the generator matrix of a linear block
code. A matrix with minimum distance d also has a ramp minimum distance d
(the converse is obviously not true).
122
J. Camenisch et al.
Given a generator matrix with ramp minimum distance, we can now construct an AoNT and thus obtain the following theorem, the proof of which is found in the full version of this paper.
Theorem 5.3. The standard realization of an AoNT (sketched above and
detailed in the full paper), parametrized by a k × n matrix G with elements in
GF(q) with ramp (instead of regular) minimum distance d, is a perfectly secure
(GF(q), (n + k), d, k)-AoNT.
It remains to find a matrix with the desired ramp minimum distance. One way is to choose a random matrix, as shown by the following theorem, which we prove in the full paper.
Theorem 5.4. For all (n, k, d) ∈ ℕ³ and all prime powers q, a k × n matrix whose elements are chosen independently and uniformly at random over GF(q) has ramp minimum distance d with probability at least

1 − Σ_{i=1}^{k} (k choose i) · (q − 1)^i · q^{(H_q((d−i)/n) − 1)·n},

where H_q(x) := 0 if x = 0 or x = 1, and H_q(x) := x·log_q(q − 1) − x·log_q(x) − (1 − x)·log_q(1 − x) if 0 < x < 1.
Unfortunately, we do not know of any eﬃcient method to check whether a random matrix has a given ramp minimum distance. For practical parameters, however, it is feasible to generate and test such matrices with small values of k and
d (e.g., less than 20).
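The bound of Theorem 5.4 is straightforward to evaluate numerically. The sketch below uses our own helper names and clamps H_q to 0 outside (0, 1), matching its definition at the endpoints:

```python
from math import comb, log

def H_q(x, q):
    """q-ary entropy function; defined to be 0 at x = 0 and x = 1."""
    if x <= 0 or x >= 1:
        return 0.0
    return x * log(q - 1, q) - x * log(x, q) - (1 - x) * log(1 - x, q)

def ramp_distance_prob_bound(n, k, d, q):
    """Theorem 5.4 lower bound on the probability that a random k x n
    matrix over GF(q) has ramp minimum distance d."""
    s = sum(comb(k, i) * (q - 1) ** i * q ** ((H_q((d - i) / n, q) - 1) * n)
            for i in range(1, k + 1))
    return 1 - s
```

For parameters where d is small relative to n, the exponent is strongly negative and the bound is close to 1, which is why random matrices are good candidates.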
Better AoNTs Using our Realization. Given a ﬁxed size, it is sometimes possible
to ﬁnd matrices with a given ramp minimum distance but no matrix with the
same (regular) minimum distance. Hence AoNTs based on matrices with a ramp
minimum distance can achieve better parameters than previously known realizations. We now illustrate this fact with a numerical example. Let us determine
the best message length k that a perfect AoNT with fixed parameters n = 30, d = 12, and q = 2 can achieve with both our realization and the standard realization. Both realizations require a k × (30 − k) matrix with (ramp or regular, respectively) minimum distance d = 12. First, observe that there
exists a 6 × 24 matrix over GF(2) with ramp minimum distance 12 (see the full
paper). Hence using our realization, we can achieve k = 6. Plotkin [16] showed
that a binary code with block length 2d and distance d can have at most 4d
codewords. Hence there cannot exist a 6 × 24 matrix with (regular) minimum
distance d = 12 (as it would generate a code with 2^6 = 64 codewords, which
is more than 4d = 48). The best AoNT one can hope for using the standard
realization thus has k = 5.
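The Plotkin-bound arithmetic in this example can be checked directly:

```python
# Plotkin bound for the example: a binary code with block length 2d = 24 and
# distance d = 12 has at most 4d = 48 codewords.  A 6 x 24 generator matrix
# would span 2**6 = 64 codewords, so it is ruled out; k = 5 is not.
d = 12
assert 2 ** 6 > 4 * d       # 64 > 48: k = 6 contradicts the Plotkin bound
assert 2 ** 5 <= 4 * d      # 32 <= 48: k = 5 survives the bound
```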
Statistical Security. Theorem 5.4 stated that by choosing a random generator matrix, one can achieve a certain ramp minimum distance with a certain probability (1 − ε). If one uses our realization, but without checking that the matrix actually has the required ramp minimum distance, then the resulting AoNT will be perfectly secure with probability (1 − ε). (Note that this is different from saying that the AoNT is ε-secure, as the randomness used to generate the AoNT is not part of the distinguishing experiment.) In practice, one can make ε very small, e.g., ε < 2^−η, and it might then be acceptable to choose a random matrix and not check its properties to realize an AoNT.
5.3 Realizing a Perfectly Secure AoNT over a Small Field by Combining AoNTs
Designing perfectly secure AoNTs over very small fields, e.g., GF(2), is hard. The previous realization does not scale well to large message lengths k and large privacy thresholds d, and realizations based on Shamir's secret sharing scheme are always over large fields: using such a (GF(2^a), n, d, k)-AoNT unmodified over GF(2) instead would result in a (GF(2), an, d, ak)-AoNT with a poor privacy threshold d, since the leakage of any GF(2) element means that the entire original GF(2^a) element is compromised. We now show how to combine the two approaches to realize a perfectly secure AoNT over a small field but with large k and d.
Our realization requires two AoNTs, a "fine-grained" one and a "coarse-grained" one, operating over a small field S and a large field L, respectively. We require that the number of elements of L be a power of that of S, and set k^s = log(|L|)/log(|S|). We need to interpret a string of k^s·k^ℓ elements of S as a string of k^ℓ elements of L, an operation we denote by S→L; the converse operation is denoted L→S.
The encoding function of our combined AoNT then works as follows. One
ﬁrst applies the coarse-grained AoNT to the whole data vector and then applies
the ﬁne-grained AoNT to each element of the result:
aenc(a ∈ S^{k^s·k^ℓ}) : x ← aenc^ℓ(S→L(a)); ∀j ∈ {1, …, n^ℓ} : b[j] ←$ aenc^s(L→S(x[j])); return b.
It is easy to see how the decoding function adec of the combined AoNT works, and it is thus omitted. We have the following theorem, the proof of which is found in the full version of this paper.
Theorem 5.5. Given a perfectly secure (S, n^s, d^s, k^s)-AoNT (aenc^s, adec^s) and a perfectly secure (L, n^ℓ, d^ℓ, k^ℓ)-AoNT (aenc^ℓ, adec^ℓ) such that k^s = log(|L|)/log(|S|), the AoNT (aenc, adec) described above is a perfectly secure (S, n^s·n^ℓ, (d^s + 1)(d^ℓ + 1) − 1, k^s·k^ℓ)-AoNT.
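The composition can be sketched as follows. Here `pad_aenc`/`pad_adec` is a degenerate one-time-pad placeholder standing in for both real AoNTs (it has the right shape but no useful privacy threshold), and the S→L/L→S reinterpretations reduce to chunking and flattening lists of bits; all names are our own:

```python
import random

def pad_aenc(msg, rng):
    """Placeholder AoNT over S = GF(2): output (r, msg xor r).  Stands in
    for the real fine-/coarse-grained AoNTs of this section."""
    r = [rng.randrange(2) for _ in msg]
    return r + [m ^ x for m, x in zip(msg, r)]

def pad_adec(code):
    h = len(code) // 2
    return [a ^ b for a, b in zip(code[:h], code[h:])]

def combined_aenc(a_bits, ks, rng):
    """Apply the coarse-grained AoNT to the whole message (viewed via S->L
    as elements of L = GF(2^ks), i.e. lists of ks bits), then apply the
    fine-grained AoNT to each resulting L-element (via L->S)."""
    elems = [a_bits[i:i + ks] for i in range(0, len(a_bits), ks)]  # S -> L
    flat = [b for e in elems for b in e]
    x = pad_aenc(flat, rng)                        # coarse-grained layer
    chunks = [x[i:i + ks] for i in range(0, len(x), ks)]           # L -> S
    return [pad_aenc(c, rng) for c in chunks]      # fine-grained layer

def combined_adec(blocks):
    x = [b for c in blocks for b in pad_adec(c)]   # undo fine-grained layer
    return pad_adec(x)                             # undo coarse-grained layer
```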
Numerical Example. Let us suppose that we are interested in a perfect AoNT
that operates over S = GF(2) and that can store a cryptographic key of size
k = 256 bits using at most n = 8192 bits (a kilobyte) of memory.
If we use a (GF(2^10), 819, 793, 26)-AoNT built according to Franklin and Yung [7] unmodified over the field GF(2), we get a (GF(2), 8190, 793, 260)-AoNT.
This AoNT has a privacy threshold d of only 793 bits.
By combining a (GF(2), 32, 11, 8)-AoNT (which can be found by exhaustive search) with a (GF(2^8), 255, 223, 32)-AoNT built according to Franklin and Yung [7], one gets a (GF(2), 8160, 2687, 256)-AoNT. This AoNT has a much better privacy threshold d of 2687, i.e., 2687 arbitrary bits may leak to the adversary.
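The parameters claimed for the combined AoNT follow directly from Theorem 5.5:

```python
ns, ds, ks = 32, 11, 8       # fine-grained (GF(2), 32, 11, 8)-AoNT
nl, dl, kl = 255, 223, 32    # coarse-grained (GF(2^8), 255, 223, 32)-AoNT
assert ns * nl == 8160                     # total length n = n_s * n_l
assert (ds + 1) * (dl + 1) - 1 == 2687     # privacy threshold (d_s+1)(d_l+1)-1
assert ks * kl == 256                      # message length k = k_s * k_l
```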
5.4 Computationally Secure AoNT over a Large Field from a PRG
We now present a realization of a computationally secure AoNT over a large field GF(2^η), where η is the security parameter. Our realization is optimal in the sense that it achieves both an optimal message length k = n − 1 (thus an optimal rate (n − 1)/n) and an optimal privacy threshold d = n − 1. That is, the AoNT needs just a single additional element to encode a message and remains private even if the adversary obtains all but any one element.
Definition 5.6. An ℓ-PRG whose output length is a multiple of its input length, i.e., prg : GF(2^η) → GF(2^η)^{ℓ(η)/η}, is KD-secure if for all i = 1, …, ℓ(η)/η, the following ensembles are computationally indistinguishable:

– {(x_1, …, x_{i−1}, x′_i, x_{i+1}, …, x_{ℓ(η)/η})}_{1^η}, where sk ←$ GF(2^η), x ← prg(sk), and x′_i ← x_i + sk;
– {x}_{1^η}, where x ←$ GF(2^η)^{ℓ(η)/η}.
Note that this property is somewhat reminiscent of the KDM-CCA2 security of
encryption functions [2].
Our realization, somewhat reminiscent of the OAEP realization of Canetti
et al. [3], is as follows:
aenc(m ∈ GF(2^η)^{ℓ(η)/η}) : sk ←$ GF(2^η); x ← prg(sk); y ← x + m; return y ‖ (sk + Σ_{i=1}^{ℓ(η)/η} y_i).

adec(y ‖ z) : return y − prg(z − Σ_{i=1}^{ℓ(η)/η} y_i).
Theorem 5.7. Given an ℓ-PRG that is both secure and KD-secure, the realization above yields a secure (GF(2^η), 1 + ℓ(η)/η, ℓ(η)/η, ℓ(η)/η)-AoNT.
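A sketch of this construction with η = 128, addition in GF(2^η) realized as XOR on η-bit integers, and a hash-in-counter-mode stand-in for the PRG (our own toy instantiation; a real one would additionally need the KD-security of Definition 5.6):

```python
import hashlib
import random

ETA = 128                                  # security parameter eta, in bits

def prg(sk, blocks):
    """Toy PRG stand-in: SHA-256 in counter mode, truncated to eta bits."""
    out = []
    for i in range(blocks):
        h = hashlib.sha256(sk.to_bytes(ETA // 8, "big") + i.to_bytes(4, "big"))
        out.append(int.from_bytes(h.digest()[: ETA // 8], "big"))
    return out

def aenc(m, rng):
    """Encoding: y = prg(sk) + m, with final element sk + sum_i y_i
    (all additions in GF(2^eta), i.e. XOR)."""
    sk = rng.getrandbits(ETA)
    y = [xi ^ mi for xi, mi in zip(prg(sk, len(m)), m)]
    chk = 0
    for yi in y:
        chk ^= yi
    return y + [sk ^ chk]

def adec(c):
    """Decoding: recover sk = z - sum_i y_i, then strip the PRG pad."""
    y, z = c[:-1], c[-1]
    chk = 0
    for yi in y:
        chk ^= yi
    sk = z ^ chk
    return [yi ^ xi for yi, xi in zip(y, prg(sk, len(y)))]
```

Note how losing any single element destroys either the pad or the ability to recover sk, which is the intuition behind the optimal privacy threshold.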
The proof of this theorem is found in the full version of this paper. There we
further observe that Canetti et al.’s [3] computationally-secure AoNT built by
combining an exposure resilient function (ERF) with a pseudo-random generator
(PRG) can have an essentially arbitrarily high message length k and message
rate k/n, but cannot achieve a very high privacy threshold d.
References
1. Bernstein, D.J.: Cache-timing attacks on AES. Manuscript, April 2005. https://
cr.yp.to/antiforgery/cachetiming-20050414.pdf
2. Camenisch, J., Chandran, N., Shoup, V.: A public key encryption scheme secure
against key dependent chosen plaintext and adaptive chosen ciphertext attacks.
In: Joux, A. (ed.) EUROCRYPT 2009. LNCS, vol. 5479, pp. 351–368. Springer,
Heidelberg (2009)
3. Canetti, R., Dodis, Y., Halevi, S., Kushilevitz, E., Sahai, A.: Exposure-resilient
functions and all-or-nothing transforms. In: Preneel, B. (ed.) EUROCRYPT 2000.
LNCS, vol. 1807, pp. 453–469. Springer, Heidelberg (2000)
4. Canetti, R., Eiger, D., Goldwasser, S., Lim, D.-Y.: How to protect yourself without perfect shredding. In: Aceto, L., Damgård, I., Goldberg, L.A., Halldórsson, M.M., Ingólfsdóttir, A., Walukiewicz, I. (eds.) ICALP 2008, Part II. LNCS, vol. 5126, pp. 511–523. Springer, Heidelberg (2008)
5. Canetti, R., Eiger, D., Goldwasser, S., Lim, D.-Y.: How to protect yourself without
perfect shredding. Cryptology ePrint Archive, Report 2008/291 (2008)
6. Di Crescenzo, G., Ferguson, N., Impagliazzo, R., Jakobsson, M.: How to forget a
secret. In: Meinel, C., Tison, S. (eds.) STACS 1999. LNCS, vol. 1563, pp. 500–509.
Springer, Heidelberg (1999)
7. Franklin, M.K., Yung, M.: Communication complexity of secure computation
(extended abstract). In: 24th ACM STOC, pp. 699–710. ACM Press, May 1992
8. Gaži, P., Maurer, U., Tackmann, B.: Manuscript (available from the authors)
9. Gutmann, P.: Secure deletion of data from magnetic and solid-state memory. In:
Proceedings of the Sixth USENIX Security Symposium, vol. 14, San Jose, CA
(1996)
10. Hazay, C., Lindell, Y., Patra, A.: Adaptively secure computation with partial erasures. Cryptology ePrint Archive, Report 2015/450 (2015)
11. Jarecki, S., Lysyanskaya, A.: Adaptively secure threshold cryptography: introducing concurrency, removing erasures (extended abstract). In: Preneel, B. (ed.)
EUROCRYPT 2000. LNCS, vol. 1807, pp. 221–242. Springer, Heidelberg (2000)
12. Katz, J., Lindell, Y.: Introduction to Modern Cryptography. CRC Press, Boca
Raton (2015)
13. Lim, D.-Y.: The paradigm of partial erasures. Ph.D. thesis, Massachusetts Institute
of Technology (2008)
14. Maurer, U.: Constructive cryptography – a new paradigm for security definitions and proofs. In: Mödersheim, S., Palamidessi, C. (eds.) TOSCA 2011. LNCS, vol. 6993, pp. 33–56. Springer, Heidelberg (2012)
15. Maurer, U., Renner, R.: Abstract cryptography. In: ICS 2011, pp. 1–21. Tsinghua
University Press, January 2011
16. Plotkin, M.: Binary codes with speciﬁed minimum distance. IRE Trans. Inf. Theor.
6(4), 445–450 (1960)
17. Reardon, J., Basin, D.A., Capkun, S.: SoK: secure data deletion. In: 2013 IEEE
Symposium on Security and Privacy, pp. 301–315. IEEE Computer Society Press,
May 2013
18. Reardon, J., Capkun, S., Basin, D.: Data node encrypted file system: efficient secure deletion for flash memory. In: Proceedings of the 21st USENIX Conference
on Security Symposium, pp. 17–17. USENIX Association (2012)
19. Reardon, J., Ritzdorf, H., Basin, D.A., Capkun, S.: Secure data deletion from
persistent media. In: ACM CCS 2013, pp. 271–284. ACM Press, November 2013
20. Yee, B.: Using secure coprocessors. Ph.D. thesis, CMU (1994)
21. Yee, B., Tygar, J.D.: Secure coprocessors in electronic commerce applications. In:
Proceedings of The First USENIX Workshop on Electronic Commerce, New York
(1995)
Multi-party Computation
On Adaptively Secure Multiparty Computation
with a Short CRS
Ran Cohen¹(B) and Chris Peikert²
¹ Department of Computer Science, Bar-Ilan University, Ramat Gan, Israel
cohenrb@cs.biu.ac.il
² Computer Science and Engineering, University of Michigan, Ann Arbor, USA
cpeikert@umich.edu
Abstract. In the setting of multiparty computation, a set of mutually
distrusting parties wish to securely compute a joint function of their
private inputs. A protocol is adaptively secure if honest parties might
get corrupted after the protocol has started. Recently (TCC 2015) three
constant-round adaptively secure protocols were presented [10, 11, 15]. All
three constructions assume that the parties have access to a common reference string (CRS) whose size depends on the function to compute, even
when facing semi-honest adversaries. It is unknown whether constant-round adaptively secure protocols exist without assuming access to such a CRS.
In this work, we study adaptively secure protocols that rely only on a short CRS that is independent of the function to compute.
– First, we raise a subtle issue relating to the usage of non-interactive
non-committing encryption within security proofs in the UC framework, and explain how to overcome it. We demonstrate the problem
in the security proof of the adaptively secure oblivious-transfer protocol from [8] and provide a complete proof of this protocol.
– Next, we consider the two-party setting where one of the parties has
a polynomial-size input domain, yet the other has no constraints on
its input. We show that assuming the existence of adaptively secure
oblivious transfer, every deterministic functionality can be computed
with adaptive security in a constant number of rounds.
– Finally, we present a new primitive called non-committing indistinguishability obfuscation, and show that this primitive is complete
for constructing adaptively secure protocols with round complexity
independent of the function.
R. Cohen—Work supported by the European Research Council under the ERC consolidators grant agreement n. 615172 (HIPS), by a grant from the Israel Ministry of
Science, Technology and Space (grant 3-10883) and by the National Cyber Bureau
of Israel.
C. Peikert—This material is based upon work supported by the National Science
Foundation under CAREER Award CCF-1054495 and CNS-1606362, the Alfred P.
Sloan Foundation, and by a Google Research Award. The views expressed are those
of the authors and do not necessarily reﬂect the oﬃcial policy or position of the
National Science Foundation, the Sloan Foundation, or Google.
© Springer International Publishing Switzerland 2016
V. Zikas and R. De Prisco (Eds.): SCN 2016, LNCS 9841, pp. 129–146, 2016.
DOI: 10.1007/978-3-319-44618-9 7
1 Introduction

1.1 Background
In the setting of secure multiparty computation, a set of mutually distrusting
parties wish to jointly compute a function on their private inputs in a secure
manner. Loosely speaking, the security requirements ensure that even if a subset of dishonest parties collude, nothing is learned from the protocol other than
the output (privacy), and the output is distributed according to the prescribed
functionality (correctness). This threat is normally modeled by a central adversarial entity that might corrupt a subset of the parties and control them. A
protocol is considered secure if whatever an adversary can achieve when attacking an execution of the protocol, can be emulated in an ideal world, where an
incorruptible trusted party helps the parties to compute the function.
Initial constructions of secure protocols were designed under the assumption
that the adversary is static, meaning that the set of corrupted parties is determined prior to the beginning of the protocol’s execution [20,31]. Starting from
the work of Beaver and Haber [2] and of Canetti et al. [7], protocols that remain
secure facing adaptive adversaries were considered. In this setting, the adversary
can decide which parties to corrupt during the course of the protocol and based
on its dynamic view. Adaptive security forms a greater challenge compared to
static security, in particular because the adversary can corrupt honest parties
after the protocol has completed. Furthermore, it can corrupt all the parties,
thus learning all the randomness that was used in the protocol.¹
The ﬁrst adaptively secure protocol, which remains secure facing an arbitrary number of corrupted parties, was presented by Canetti et al. [8]. They
showed that under some standard cryptographic assumptions, any adaptively
well-formed functionality² can be securely computed facing adaptive malicious
adversaries. This result follows the GMW paradigm [20], and consists of two
stages: First, a protocol secure against adaptive semi-honest adversaries was
constructed. This protocol is secure in the plain model, where no setup assumptions are needed; however, the number of communication rounds in this protocol
depends on the circuit-depth of the underlying functionality. In the second stage,
the protocol was compiled into a protocol secure against adaptive malicious
adversaries; the semi-honest to malicious compiler, presented in [8], maintains
the round complexity, and is secure assuming that all parties have access to a
common reference string (CRS).³
¹ In this work we do not assume the existence of secure erasures, meaning that we do not rely on the ability of an honest party to erase specific parts of its memory.
² An adaptively well-formed functionality is a functionality that reveals its random input in case all parties are corrupted [8].
³ Since the protocol of [8] is designed in the UC framework of Canetti [5], security against malicious adversaries requires some form of trusted-setup assumption; see [6,9,27].

Recently, three adaptively secure protocols that run in a constant number of rounds were independently presented by Canetti et al. [10], Dachman-Soled et al. [11] and Garg and Polychroniadou [15]. All three protocols are
designed in the CRS model and share the idea of embedding inside the CRS
an obfuscated program that receives the circuit to compute as one of its input
variables. It follows that the size of the CRS depends on the size of the circuit,
and moreover, the CRS is needed even when considering merely semi-honest
adversaries. Dachman-Soled et al. [11] and Garg and Polychroniadou [15] raised
the question of whether these requirements are necessary.
1.2 Our Contribution
In this work we consider adaptive security with a short CRS. By this we mean
two security notions: adaptive security facing semi-honest adversaries in the plain
model (i.e., without a CRS) and adaptive security facing malicious adversaries
in the CRS model, where the CRS does not depend on the size of the circuit to
compute.
Non-interactive Non-committing Encryption in the UC Framework. A non-interactive non-committing encryption scheme is a public-key encryption scheme
augmented with the ability to generate a fake public key and a fake ciphertext
that can later be explained as an encryption of any message. This primitive
serves as a building block for several cryptographic constructions, e.g., instantiating adaptively secure communication channels [7], adaptively secure oblivious
transfer (OT) [8] and leakage-resilient protocols [3].
Although (interactive) non-committing encryption (NCE) was introduced well before the standard security models for adaptive security were formalized, namely the sequential-composition framework of [4] and the universal-composability (UC) framework of [5], it has been a folklore belief that non-interactive NCE is secure in these frameworks. We revisit the security of non-interactive NCE and show that although it is straightforward to prove its security in the framework of sequential composition, this is not as obvious in the UC
framework. The reason lies in a subtle diﬀerence between the two frameworks:
in the framework of [4], all the parties are initialized with their inputs prior to
the beginning of the protocol, whereas in the UC framework, the environment
can adaptively provide inputs to the parties after the protocol has started.
This may lead to the following attack. The environment ﬁrst activates the
receiver that generates a public key. This is simulated by generating the (fake)
non-committing public key and ciphertext. Next, the adversary corrupts the
receiver and learns its random coins (before the sender has been activated with
input). At this point, the simulator must explain the key generation before the
plaintext has been determined. Finally, the environment activates the sender
with a random message. The problem is that once the random coins for the
key generation have been ﬁxed, the ciphertext becomes committing, and with a
non-negligible probability will fail to decrypt to the random plaintext.
Not realizing these subtleties may lead to incomplete security proofs when
using non-interactive NCE as a building block for protocols in the UC framework.
We show that the simulator can in fact cater for such forms of attack, without any
adjustments to the protocols, by carefully combining non-committing
ciphertexts and committing ciphertexts during the simulation. We thus prove
that the deﬁnition of non-interactive NCE is valid in the UC framework. We
further show that the proof of security of the adaptively secure OT in Canetti
et al. [8] is incomplete and explain how to rectify it. We emphasize that the
results in [8] are valid; it is merely the proof that is incomplete.
Functionalities with One-Sided Polynomial-Size Domain. We next consider
deterministic two-party functionalities f (x1 , x2 ), where the input domain of P1 ,
denoted D1 , is of polynomial-size. We observe that in this situation, P2 can
locally compute f on its input x2 and every possible input of P1 and obtain
all possible outputs. All that P1 needs to do now is to select the output corresponding to its input x1 . Therefore, the computation of such functionalities boils
down to the ability to compute 1-out-of-|D1 | adaptively secure oblivious transfer.
Using the adaptively secure OT from [8], we conclude that for every such functionality there exists a three-message protocol that is secure in the presence of
adaptive semi-honest adversaries. Security against malicious adversaries follows
using the CLOS compiler.
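The idea above can be sketched as follows. Here `ideal_ot` models the assumed adaptively secure 1-out-of-|D1| OT as an ideal functionality, so this sketch illustrates only the protocol structure and provides no security by itself; all names are our own:

```python
def one_sided_protocol(f, D1, x1, x2, ot):
    """Sketch for a deterministic two-party functionality where P1's input
    domain D1 has polynomial size: P2 evaluates f on every candidate input
    of P1, and P1 obtains the entry for its own input via 1-out-of-|D1| OT."""
    table = [f(x, x2) for x in D1]      # P2's local computation on all of D1
    return ot(table, D1.index(x1))      # P1 learns only f(x1, x2)

def ideal_ot(messages, choice):
    """Ideal 1-out-of-n OT functionality: the chooser learns one message."""
    return messages[choice]
```

With an adaptively secure OT plugged in, this yields a three-message adaptively semi-honest protocol, as stated above.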
This result can be interpreted in two ways. On the one hand, it shows that
restricting the domain of one of the parties yields a constant-round adaptively
secure protocol. On the other hand, it shows that in order to try and prove
a lower bound for constant-round adaptively secure protocols in general, one
must consider either functionalities with super-polynomial input domains, or
probabilistic functionalities.
Non-committing Indistinguishability Obfuscation. An indistinguishability obfuscator iO [1] is a machine that given a circuit, creates an “unintelligible” version
of it, while maintaining its functionality. “Unintelligible” means, in this case,
that given two circuits of the same length that compute exactly the same function, it is infeasible to distinguish an obfuscation of the first circuit from
an obfuscation of the second. This primitive has been shown to be useful for a
vast amount of applications, and recently led to a construction of constant-round
adaptively secure protocols in the CRS model [10,11,15].
All three protocols [10,11,15] share a clever idea of embedding an obfuscated
program inside the CRS, such that a certain amount of the randomness that is
used in the execution of the protocol is kept hidden, even if all parties are eventually corrupted. In this section we explore a diﬀerent approach to this problem,
inspired by the concept of NCE. We present an adaptive analogue for iO called
non-committing indistinguishability obfuscator, which essentially allows the simulator to produce an obfuscated circuit for some circuit class, and later, given
any circuit in the class, produce appropriate random coins explaining the obfuscation process. We then show that assuming the existence of non-committing
iO, every adaptively well-formed functionality can be computed with adaptive
security and round complexity that is independent of the functionality.
We emphasize that currently we do not know how to construct non-committing iO, or even if such a construction is possible. Rather, this result
serves as a reduction from the problem of constructing adaptively secure protocols with round complexity independent of the function to the problem of
constructing non-committing iO. We note that the cryptographic literature has
previously considered several complete primitives that cannot be instantiated
in the plain model, e.g., “simultaneous broadcast” which is complete for partial fairness [24] and “fair reconstruction” which is complete for complete fairness [21]. In contrast, no such lower bound is known for the complete primitive
presented in this work. We leave it as an interesting open question to determine whether non-committing iO can be instantiated in the plain model under
standard assumptions or not.
By a non-committing indistinguishability obfuscator for some class of equivalent circuits (i.e., circuits that compute the same function), we mean an iO scheme for this class, augmented with a simulation algorithm that generates an obfuscated circuit C̃ such that later, given any circuit C from the class, it is possible to generate random coins that explain the obfuscated circuit C̃ as an obfuscation of the circuit C. It is not hard to see that if non-committing iO schemes exist in general, then the polynomial hierarchy collapses (see Sect. 5).
In order to overcome this barrier, we consider a limited set of circuit classes,
which turns out to be suﬃcient for our needs. In particular, we consider classes
of equivalent “constant circuits”, i.e., all circuits in the class are of the same size,
receive no input and output the same value.
We next explain how to use non-committing iO in order to construct a protocol for any two-party functionality f , where the round complexity depends on
the obfuscator rather than on f (this idea extends in a straightforward way to
the multiparty setting). First, the parties use any adaptively secure protocol,
e.g., the protocol from [8], to compute an intermediate functionality that given
the parties’ inputs and a circuit to compute f , hard-wires the input values to
the input wires of the circuit. This way the intermediate functionality generates
a “constant circuit” computing the desired output. Next, the intermediate functionality obfuscates this “constant circuit” using random coins provided by the
parties and outputs to each party an obfuscated constant circuit. Finally, each
party locally computes the output of the obfuscated constant circuit.
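The role of the intermediate functionality can be sketched as follows. Since no candidate non-committing iO is known, `toy_obfuscate` is a placeholder identity "obfuscator" that hides nothing; the sketch only illustrates the hard-wiring of inputs into a constant circuit:

```python
def intermediate_functionality(f, x1, x2, r1, r2, obfuscate):
    """Sketch of the two-party construction: hard-wire both inputs into a
    "constant circuit" (no inputs, fixed output) and obfuscate it with the
    parties' joint randomness.  `obfuscate` stands in for an assumed
    non-committing indistinguishability obfuscator."""
    out = f(x1, x2)
    constant_circuit = lambda: out            # input-free circuit for f(x1, x2)
    return obfuscate(constant_circuit, r1 ^ r2)

def toy_obfuscate(circuit, coins):
    """Placeholder 'obfuscator': returns the circuit unchanged (no hiding)."""
    return circuit
```

Each party then locally evaluates the returned (obfuscated) constant circuit to obtain the output.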
The underlying idea is that upon the first corruption request, the ideal-process adversary learns both the input and the output of the corrupted party,
and so can prepare a simulated obfuscated constant circuit that outputs the
correct value. Upon the second corruption request, the ideal-process adversary
learns the input of the second party and can prepare the constant circuit as generated by the intermediate functionality. Using the non-committing properties
of the obfuscation, the random coins explaining the obfuscated circuit can be
computed at this point, and so the ideal-process adversary can correctly adjust
the random coins that are used for the obfuscation.