

\end{comment}




In this thesis, we presented new cryptographic schemes that rely on lattice or pairing assumptions.


These contributions focus on the design and the analysis of new cryptographic schemes that target privacy-preserving applications.




In pairing-based cryptography, we proposed a practical dynamic group signature scheme, whose security relies on well-understood assumptions in the random oracle model.


It relies on widely used assumptions with simple and constant-size descriptions, which have been studied for more than ten years.


This work is also supported by an implementation in \texttt{C}.




The results in the lattice setting gave rise to three realizations of fundamental primitives that were missing in the landscape of lattice-based privacy-preserving cryptography.


Even though these schemes suffer from a lack of efficiency due to their novelty, we believe that they take one step towards a quantum-secure privacy-friendly world.




Along the way, we improved the state of the art of zero-knowledge proofs in the lattice setting by providing building blocks that, we believe, are of independent interest.


For example, our signature with efficient protocols has already been used to design a privacy-preserving lattice-based e-cash system~\cite{LLNW17}.




All these works are proven to satisfy strong security models under simple assumptions.


This provides a breeding ground for new theoretical constructions.




\section*{Open Problems}




The path of providing new cryptographic primitives and proving them secure is full of pitfalls.


The most obvious question that stems from this work is how to tackle the trade-offs we made in the design of those primitives. In particular, the following question naturally arises:




\begin{question}


Is it possible to build a fully-simulatable adaptive oblivious transfer (even without access control) secure under $\LWE$ with polynomially large modulus?







In other words, is it possible to avoid the use of noise flooding to guarantee receiver security in the adaptive oblivious transfer scheme of~\cref{ch:otlwe}?


In our current protocol, this issue arises from the use of Regev's encryption scheme, where we need to prevent the noise distribution from leaking the receiver's index.
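To see why noise flooding is so costly, consider a simple model (an illustration only, not the exact distributions of our protocol): a sensitive noise term $e$ is hidden by adding uniform smudging noise over $[-B, B]$. The statistical distance between the smudged and unsmudged distributions is $|e|/(2B+1)$, so $B$, and hence the modulus, must be superpolynomially larger than $|e|$ to make the leakage negligible.

```python
# Statistical distance between U[-B, B] and e + U[-B, B], computed exactly.
# This toy model shows why hiding a noise term e by "flooding" it with
# uniform noise forces B (and thus the modulus) to be much larger than |e|.
from fractions import Fraction

def smudging_distance(e, B):
    # probability mass function of the uniform distribution on [-B, B]
    p = lambda v: Fraction(1, 2 * B + 1) if -B <= v <= B else Fraction(0)
    # statistical distance: half the L1 distance between the two pmfs
    support = range(-B - abs(e), B + abs(e) + 1)
    return sum(abs(p(v) - p(v - e)) for v in support) / 2

# distance |e| / (2B + 1): negligible only when B >> |e|
assert smudging_distance(3, 100) == Fraction(3, 201)
print(smudging_distance(3, 100))
```

The computed distance matches the closed form $|e|/(2B+1)$, which is why a polynomially large $B$ (and modulus) cannot bring it below a negligible bound.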


However, while a finer analysis of the noise in GSW ciphertexts~\cite{GSW13} seems promising to achieve this at reasonable cost~\cite{BDPMW16}, it is not sufficient in our setting because it would leak the norm of the noise vector of ciphertexts.
Then, another difficulty is to have zero-knowledge proofs compatible with the access control and the encryption components.




\begin{question}


Can we construct provably secure adaptive oblivious transfer schemes in the universal composability model?







Our adaptive oblivious transfer scheme relies on zero-knowledge proofs to hedge against malicious adversaries.


The security proofs take advantage of the fact that the proofs can be rewound to extract a witness (as described in~\cref{de:pok}).


The Peikert-Vaikuntanathan-Waters~\cite{PVW08} construction, based on dual-mode encryption, achieves $1$-out-of-$2$ composable oblivious transfer (which can be generalized to $1$-out-of-$2^t$ OT) without relying on zero-knowledge proofs, but it does not imply OT with adaptive queries (i.e., where each index $\rho_i$ may depend on messages received in previous transfers).
Actually, the use of $\ZK$ proofs is not ruled out in this setting, as shown by the pairing-based construction of Green and Hohenberger~\cite{GH08}.


However, this protocol uses the trapdoor extractability of Groth-Sahai proofs~\cite{GS08} to achieve straight-line extraction, which is not known to be possible in the lattice setting.




\begin{question}





\end{question}




Another privacy-preserving primitive is compact e-cash~\cite{Cha82,Cha83,CHL05a}. As explained in the introduction, it is the digital equivalent of real-life money.


A body of research followed its introduction~\cite{CFN88,OO91,CP92,FY93,Oka95,Tsi97}, and the first compact realization was given by Camenisch, Hohenberger and Lysyanskaya~\cite{CHL05a} (here, ``compact'' means that the complexity of coin transfers is at most logarithmic in the value of withdrawn wallets).
Before the work of Libert, Ling, Nguyen and Wang~\cite{LLNW17}, all compact constructions were based on traditional number-theoretic techniques.


This construction still suffers from efficiency issues akin to those we encountered in this thesis.


It is thus interesting to improve the efficiency of this scheme and obtain viable constructions of anonymous ecash from postquantum assumptions.









Extending the work of Groth, Ostrovsky and Sahai~\cite{GOS06} to the lattice setting would be a breakthrough result for lattice-based cryptography in general.


This question has remained open for more than ten years~\cite{PV08}.


A recent line of work makes steps forward in this direction~\cite{KW18,RSS18}, but these works rely on primitives that do not exist yet~\cite{RSS18} ($\NIZK$ proofs for a variant of the bounded distance decoding problem) or assume preprocessing~\cite{KW18}.




The Stern-like proof systems we studied in this thesis, despite being flexible enough to prove a large variety of statements, suffer from the stiffness of being combinatorial.


The choice of permutations used to ensure the zero-knowledge property (and thus witness-indistinguishability) is quite strict, and forces the challenge space to be ternary.


This turns out to be a real bottleneck in the efficiency of such proof systems.
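To make the role of the permutations and of the ternary challenge concrete, the following is a toy, deliberately insecure sketch of one round of a Stern-like protocol for proving knowledge of a binary $\mathbf{x}$ with $\mathbf{A}\mathbf{x} = \mathbf{y} \bmod q$. It uses hash-based commitments and tiny parameters; a real instantiation also fixes the Hamming weight of $\mathbf{x}$ and uses statistically hiding commitments.

```python
# Toy one-round Stern-like proof: knowledge of binary x with A x = y mod q.
# Insecure illustration (hash commitments, tiny parameters); the random
# permutation pi hides x, and the challenge space is forced to be {1, 2, 3}.
import hashlib, pickle, random

q, n, m = 257, 4, 8

def com(*data):
    rho = random.getrandbits(128)              # commitment randomness
    return hashlib.sha256(pickle.dumps((data, rho))).hexdigest(), rho

def check(c, rho, *data):
    return c == hashlib.sha256(pickle.dumps((data, rho))).hexdigest()

def matvec(A, v):
    return tuple(sum(a * b for a, b in zip(row, v)) % q for row in A)

def permute(pi, v):
    return tuple(v[i] for i in pi)

random.seed(1)                                 # fixed toy instance
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
x = [random.randrange(2) for _ in range(m)]    # binary witness
y = matvec(A, x)

def prove_round(ch):
    r = [random.randrange(q) for _ in range(m)]        # masking vector
    pi = random.sample(range(m), m)                    # random permutation
    z = [(xi + ri) % q for xi, ri in zip(x, r)]
    c1, r1 = com(tuple(pi), matvec(A, r))
    c2, r2 = com(permute(pi, r))
    c3, r3 = com(permute(pi, z))
    coms = (c1, c2, c3)
    if ch == 1:   # reveal permuted x and r: shows x is binary, hides x itself
        return coms, (permute(pi, x), permute(pi, r), r2, r3)
    if ch == 2:   # reveal pi and z = x + r: links to A z - y = A r mod q
        return coms, (pi, z, r1, r3)
    return coms, (pi, r, r1, r2)               # ch == 3: reveal pi and r

def verify_round(ch, coms, resp):
    c1, c2, c3 = coms
    if ch == 1:
        px, pr, r2, r3 = resp
        psum = tuple((a + b) % q for a, b in zip(px, pr))
        return all(v in (0, 1) for v in px) and \
            check(c2, r2, pr) and check(c3, r3, psum)
    if ch == 2:
        pi, z, r1, r3 = resp
        t = tuple((a - b) % q for a, b in zip(matvec(A, z), y))
        return check(c1, r1, tuple(pi), t) and check(c3, r3, permute(pi, z))
    pi, r, r1, r2 = resp
    return check(c1, r1, tuple(pi), matvec(A, r)) and \
        check(c2, r2, permute(pi, r))

for ch in (1, 2, 3):
    coms, resp = prove_round(ch)
    assert verify_round(ch, coms, resp)
```

A cheating prover can prepare commitments that answer any two of the three challenges, which is exactly where the $2/3$ soundness error of one round comes from.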




\begin{question}





Can we get negligible soundness error in one shot for expressive statements in the post-quantum setting?


\end{question}




This question can be restated as ``can we combine the expressiveness of Stern-like proofs with the efficiency of Schnorr-like proofs with rejection sampling?''.
For Stern-like protocols, decreasing the soundness error from $2/3$ to $1/2$ would already be an interesting improvement with a direct impact on the efficiency of all lattice-based schemes presented in this thesis.
Recall that the \textit{soundness error} is the probability that a cheating prover convinces an honest verifier of a false statement. As long as it is bounded away from $1$, it is possible to make the soundness error negligible by repeating the protocol a sufficient number of times.
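Concretely, reaching a target soundness error of $2^{-\lambda}$ with one-round error $\varepsilon$ requires $\lceil \lambda / \log_2(1/\varepsilon) \rceil$ repetitions, which quantifies the gain of moving from $2/3$ to $1/2$:

```python
# Repetitions needed to push the overall soundness error below 2^-128,
# for a protocol whose single-round soundness error is err.
import math

def repetitions(err, target_bits=128):
    # smallest t such that err**t <= 2**(-target_bits)
    return math.ceil(target_bits / -math.log2(err))

print(repetitions(2 / 3))  # 219 rounds for Stern-like protocols
print(repetitions(1 / 2))  # 128 rounds at soundness error 1/2
```

Going from $2/3$ to $1/2$ thus shaves off roughly $40\%$ of the rounds, and each round carries its own commitments and responses.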


Likewise, isogeny-based proof systems~\cite{JDF11,GPS17} suffer from similar issues, as the challenge space is small (binary).


The $2/3$ soundness error is also present in~\cite{IKOS07}, which is a technique to obtain zero-knowledge proofs relying on secure multiparty computation.


With this technique, however, the size of the proof is proportional to the size of the circuit describing the relation we want to prove (which is not the case with Stern-like protocols).


On the other hand, the soundness error of one round of the protocol is at most $2/3$.


Thus, obtaining efficient post-quantum zero-knowledge proofs for expressive statements is a difficult question that remains open as of today.




%If these proof systems can be used after applying a transformation from average-case to worst-case problems, this methodology is highly inefficient and does not close the question.





\end{question}




In the general lattice setting, the most efficient signature schemes require at least as many matrices as the length $\ell$ of the random tag used in the signature (like the scheme in~\cref{se:gslwesigep}).


This cost has a direct impact on the efficiency and public-key size of schemes or protocols that use them: in our group signatures of~\cref{ch:gslwe}, for example, $\ell$ is logarithmic in the maximal number $\Ngs$ of members the group can accept.


In ideal lattices, it is possible to reduce this cost to a vector of size $\ell$~\cite{DM14}.


In the group signature scheme of~\cite{LNWX18}, which is based on ideal lattice problems, this property is used to allow an exponential number of group members to join the group, and thus to propose a ``constant-size'' group signature scheme.


The method used to construct this group signature is essentially the same as in \cref{ch:gslwe}, where matrices are hidden in the ring structure of the ideal lattice~\cite{LS14}.


In the construction of~\cite{LNWX18}, the dependency on $\log \Ngs$ is actually hidden in the dimension of the ring.
As these signatures are a fundamental building block for privacy-preserving cryptography, any improvement on them has a direct impact on the primitives or protocols that use them as a building block.
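To give an order of magnitude, the following back-of-the-envelope computation (with illustrative parameters that are not taken from any concrete scheme) shows how storing one $n \times m$ matrix over $\mathbb{Z}_q$ per tag bit quickly dominates the public key:

```python
# Rough public-key size for a signature scheme needing one n x m matrix
# over Z_q per tag bit, with ell = log2(N) for N group members.
# The parameters below are illustrative only, not from a concrete scheme.
import math

n, q = 256, 2**23
m = 2 * n * round(math.log2(q))            # common choice m ~ 2 n log q
bits_per_matrix = n * m * round(math.log2(q))

def pk_megabytes(N):
    ell = math.ceil(math.log2(N))          # tag length = log of group size
    return ell * bits_per_matrix / 8 / 2**20

print(round(pk_megabytes(2**20), 1))       # ~165 MB for 2^20 members
```

Even for a modest group of $2^{20}$ members, the matrices alone account for over a hundred megabytes, which is why replacing them by a single vector in the ring setting matters so much.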




\begin{question}


Can we obtain more efficient lattice-based one-time signatures in general lattices?


\end{question}




In our group signature and group encryption schemes (in \cref{ch:gslwe} and \cref{ch:gelwe}, respectively), the signature and the ciphertext contain a public key for a one-time signature scheme.


One efficiency issue is that, in lattice-based one-time signatures~\cite{LM08,Moh11}, the public key contains a full matrix, which is part of the signature/ciphertext.


Therefore, this matrix significantly increases the size of the signature/ciphertext.


As the security requirements for one-time signatures are weaker than those of full-fledged signatures (namely, the adversary has access to only one signature per public key), we can hope for more efficient constructions of one-time signatures based on general lattices, where the public key is smaller than a full matrix.




As we explained in the introduction, advanced cryptography from lattices often suffers from the use of lattice trapdoors.


Thus, a natural question may be:







In the group encryption scheme of~\cref{ch:gelwe}, for instance, trapdoors are used for two distinct purposes.


They are used to build a public-key encryption scheme secure under adaptive chosen-ciphertext attacks and a signature scheme.


These primitives are both induced by identity-based encryption: the Canetti-Halevi-Katz transform generically turns an \textsf{IBE} into an \textsf{IND-CCA2} \PKE~\cite{CHK04}, and signatures are directly implied by \textsf{IND-CPA}-secure \textsf{IBE}~\cite{BF01,BLS01}.


%Actually, even the question of having a trapdoorless \textsf{INDCCA2} public key encryption scheme still remains an open question.


Actually, a recent construction due to Brakerski, Lombardi, Segev and Vaikuntanathan~\cite{BLSV18} (inspired by~\cite{DG17a}) gives a candidate which relies on garbled circuits, and is fairly inefficient compared to \textsf{IBE} schemes with trapdoors.
Even the question of a trapdoorless \textsf{IND-CCA2} public-key encryption scheme still does not have a satisfactory solution.


The construction of Peikert and Waters~\cite{PW08} is trapdoor-free, but remains very expensive.




\begin{comment}



