




In the previous chapter, we saw that cryptography has to rely on \emph{computational hardness assumptions}.


Besides \emph{information-theoretic cryptography}, most hardness assumptions are built on top of suitable algebraic structures.


For instance, the discrete logarithm assumption (Definition~\ref{de:DLP}) is based on a cyclic group structure.




%That is, in some groups it is assumed that computing the discrete logarithm is an intractable problem for any probabilistic polynomial time algorithms.






The existence of such structures proves useful when it comes to designing protocols.


For this purpose, constructions take advantage of the mathematical properties of the structure to enable the functionality.


An example is the multiplicative homomorphism of the ElGamal cryptosystem, which is made possible by the underlying cyclic group structure.


%Namely, an ElGamal encryption of a message $M$ under the public key $h = g^\alpha_{} \in \GG$ is a couple $(c_1^{}, c_2^{}) = (g^r_{}, M \cdot h^r_{}) \in \GG^2_{}$, which can be decrypted with the knowledge of the secret key $\alpha \in \Zp$: $M = c_2^{} \cdot c_1^{-\alpha}$.


%Then, the cyclic group structure of $\GG$ leads to the ability to compute a valid ciphertext for $M \cdot M'$ given ciphertexts $(c_1^{}, c_2^{})$ and $(c'_1, c'_2)$ of $M$ and $M'_{}$ respectively.


%The resulting ciphertext is $(c_1^{} \cdot c'_1, c_2^{} \cdot c'_2) = (g^{r + r'_{}}, M \cdot M' \cdot h^{r + r'_{}})$.
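The homomorphic computation sketched in the comments above can be checked concretely. The following sketch uses a toy subgroup of $\mathbb{Z}_{23}^*$; all parameters are illustrative and far too small for any security:

```python
import random

# Toy ElGamal over the order-11 subgroup of Z_23^* (illustrative only;
# real deployments use groups of roughly 256-bit order).
p, q, g = 23, 11, 4              # g = 4 generates the subgroup of squares mod 23

alpha = random.randrange(1, q)   # secret key
h = pow(g, alpha, p)             # public key h = g^alpha

def encrypt(M):
    r = random.randrange(1, q)
    return (pow(g, r, p), M * pow(h, r, p) % p)

def decrypt(c1, c2):
    return c2 * pow(c1, -alpha, p) % p   # M = c2 * c1^{-alpha}

# Multiplicative homomorphism: the componentwise product of two
# ciphertexts is a valid encryption of the product of the plaintexts.
c1, c2 = encrypt(9)
d1, d2 = encrypt(13)
assert decrypt(c1 * d1 % p, c2 * d2 % p) == (9 * 13) % p
```

Note that the modular inverse in decryption uses Python's three-argument `pow` with a negative exponent (Python 3.8+).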






In this chapter, we describe the different structures on which the cryptographic primitives we design in this thesis are based, namely bilinear groups and lattices, as well as the related hardness assumptions.




\section{Pairing-Based Cryptography}


\addcontentsline{tof}{section}{\protect\numberline{\thesection} Cryptographie à base de couplage}







%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%




During the last decade, lattice-based cryptography has emerged as a promising candidate for post-quantum cryptography.




For example, in the first round of the NIST post-quantum competition, 28 out of 82 submissions stem from lattice-based cryptography~\cite{NIS17}.


Lattice-based cryptography takes advantage of a simple mathematical structure in order to realize advanced functionalities beyond encryption and signature schemes.


For instance, fully homomorphic encryption~\cite{Gen09,GSW13} is only known to be possible in the lattice-based world for now.






In the context of provable security, lattice assumptions benefit from a worst-case-to-average-case reduction~\cite{Reg05,GPV08,MP12,AFG14}.


Concurrently, worst-case lattice problems have been extensively analyzed in the last decade~\cite{ADS15,ADRS15,HK17}, both classically and quantumly.






This gives us good confidence in lattice assumptions (given the \emph{caveats} of Chapter~\ref{ch:proofs}) such as Learning-with-Errors ($\LWE$) and Short Integer Solution ($\SIS$), which are defined in Section~\ref{sse:latticeproblems}. The rest of this section describes some useful tools that rely on \emph{lattice trapdoors}.




\subsection{Lattices and Hard Lattice Problems}


\label{sse:latticeproblems}





\label{fig:latticebasis}


\end{figure}






A (full-rank) lattice~$\Lambda$ is defined as the set of all integer linear combinations of some linearly independent basis vectors~$(\mathbf{b}_i^{})^{}_{1\leq i \leq n}$ of~$\RR^n_{}$.


The integer~$n$ denotes the \emph{dimension} of the lattice.


A lattice basis is not unique, as illustrated in Figure~\ref{fig:latticebasis} with a dimension-$2$ lattice.


In the following, we work with $q$-ary lattices, for some prime number $q$, defined as follows.





\Lambda_q^{\mathbf{u}} (\mathbf{A}) & \triangleq \{\mathbf{e} \in \ZZ^m_{} \mid \mathbf{A} \cdot \mathbf{e} = \mathbf{u} \bmod q \}.


\end{align*}






For any lattice point $\mathbf{t} \in \Lambda_q^{\mathbf{u}} (\mathbf{A})$, it holds that $\Lambda_q^{\mathbf{u}}(\mathbf{A})=\Lambda_q^{\perp}(\mathbf{A}) + \mathbf{t}$, meaning that $\Lambda_q^{\mathbf{u}} (\mathbf{A}) $


is a shift of $\Lambda_q^{\perp} (\mathbf{A})$.


\end{definition}
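The two $q$-ary lattices and the shift property of the definition above can be made concrete with a small hypothetical matrix (the dimensions and entries below are illustrative only):

```python
# Membership tests for the q-ary lattices Λ_q^⊥(A) and Λ_q^u(A),
# and a check of the shift property Λ_q^u(A) = Λ_q^⊥(A) + t.
q = 7
A = [[1, 2, 3],
     [4, 5, 6]]                      # A in Z_q^{n x m}, with n = 2, m = 3

def A_times(e):
    """Compute A·e mod q."""
    return [sum(a * x for a, x in zip(row, e)) % q for row in A]

def in_perp(e):                      # e ∈ Λ_q^⊥(A)  iff  A·e ≡ 0 (mod q)
    return A_times(e) == [0] * len(A)

def in_coset(e, u):                  # e ∈ Λ_q^u(A)  iff  A·e ≡ u (mod q)
    return A_times(e) == u

e = [1, 5, 1]                        # A·e = (14, 35) ≡ (0, 0) mod 7
t = [1, 0, 0]
u = A_times(t)                       # t ∈ Λ_q^u(A) for this u by construction
shifted = [ei + ti for ei, ti in zip(e, t)]
assert in_perp(e) and in_coset(t, u) and in_coset(shifted, u)
```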









In order to work with lattices in cryptography, hard lattice problems have to be defined~\cite{Ajt96}.


In the following, we state the \textit{Shortest Independent Vectors Problem}~($\SIVP$).




This problem reduces to the \textit{Learning-with-Errors}~($\LWE$) problem and the \textit{Short Integer Solution}~($\SIS$) problem, as explained later in \cref{le:sishard} and~\cref{le:lwehard}.


These links are important, as those are ``worst-case-to-average-case'' reductions.






By itself, the $\SIVP$ assumption is not very handy for constructing new cryptographic designs.


On the other hand, the $\LWE$ and $\SIS$ assumptions, which are ``average-case'' assumptions, are more suitable for designing cryptographic schemes.






In order to define the $\SIVP$ problem and assumption, let us first define the successive minima of a lattice, a generalization of the minimum of a lattice (i.e., the length of a shortest nonzero vector in a lattice).




\begin{definition}[Successive minima] \label{de:latticelambda}




For a lattice $\Lambda$ of dimension $n$, let us define for each~$i \in \{1,\ldots,n\}$ the $i$th successive minimum as


\[ \lambda_i(\Lambda) = \inf \bigl\{ r \mid \dim \left( \Span\left(\Lambda \cap \mathcal B\left(\mathbf{0}, r \right) \right) \right) \geq i \bigr\}, \]


where $\mathcal B(\mathbf{c}, r)$ denotes the ball of radius $r$ centered at $\mathbf{c}$.


\end{definition}
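For intuition, the successive minima of a small lattice can be computed by brute-force enumeration of bounded integer combinations; the basis below is an arbitrary illustrative choice:

```python
import itertools, math

# Compute λ1 and λ2 of the dimension-2 lattice spanned by b1, b2
# by enumerating all integer combinations with coefficients in [-R, R].
b1, b2 = (2, 0), (1, 2)

def norm(v):
    return math.hypot(v[0], v[1])

R = 6                                # coefficient bound; enough for this basis
points = sorted(
    ((x * b1[0] + y * b2[0], x * b1[1] + y * b2[1])
     for x, y in itertools.product(range(-R, R + 1), repeat=2)
     if (x, y) != (0, 0)),
    key=norm)

lam1 = norm(points[0])               # λ1: length of a shortest nonzero vector
v1 = points[0]
lam2 = norm(next(v for v in points   # λ2: shortest vector independent of v1
                 if v[0] * v1[1] - v[1] * v1[0] != 0))
```

Here $\lambda_1 = 2$ (achieved by $\mathbf{b}_1$) while $\lambda_2 = \sqrt{5}$ (achieved by $\mathbf{b}_2$), illustrating that the minima need not be achieved by a ``shortest'' basis ordering.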






This leads us to the $\SIVP$ problem, which is to find a set of sufficiently short linearly independent vectors given a lattice basis.




\begin{definition}[$\SIVP$] \label{de:sivp}




For a dimension-$n$ lattice described by a basis $\mathbf{B} \in \RR^{n \times m}$, and a parameter $\gamma > 0$, the shortest independent vectors problem is to find $n$ linearly independent vectors $\mathbf{v}_1, \ldots, \mathbf{v}_n$ such that $\|\mathbf{v}_1\| \leq \|\mathbf{v}_2\| \leq \ldots \leq \|\mathbf{v}_n\|$ and $\|\mathbf{v}_n\| \leq \gamma \cdot \lambda_n(\mathbf{B})$.


\end{definition}




As explained before, the hardness of this assumption for worstcase lattices implies the hardness of the following two assumptions in their averagecase setting, which are illustrated in Figure~\ref{fig:lwesis}.




In particular, it means that no polynomial-time algorithm can solve those problems with non-negligible probability and non-negligible advantage given that $\SIVP$ is hard.


%As explained before, we will rely on the assumption that both algorithmic problems below are hard. Meaning that no (probabilistic) polynomial time algorithms can solve them with nonnegligible probability and nonnegligible advantage, respectively.




\begin{definition}[The $\SIS$ and $\ISIS$ problems] \label{de:sis} \index{Lattices!Short Integer Solution} \index{Lattices!Inhomogeneous \SIS}




Let~$m,q,\beta$ be functions of~$n \in \mathbb{N}$ and $\|\cdot\|$ be a norm (e.g., the Euclidean norm $\|\cdot\|_2$ or the infinity norm $\|\cdot\|_\infty$).


The \textit{Short Integer Solution} problem $\SIS_{n,m,q,\beta}$ is, given~$\mathbf{A} \sample \U(\Zq^{n \times m})$, to find~$\mathbf{x} \in \Lambda_q^{\perp}(\mathbf{A})$ with~$0 < \|\mathbf{x}\| \leq \beta$.




The \textit{Inhomogeneous Short Integer Solution}~$\ISIS_{n,m,q,\beta}$ problem is, given~$\mathbf{A} \sample \U(\Zq^{n \times m})$ and $\mathbf{u} \in \Zq^n$, to find~$\mathbf{x} \in \Lambda_q^{\mathbf{u}}(\mathbf{A})$ with~$0 < \|\mathbf{x}\| \leq \beta$.


\end{definition}
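In toy dimensions, a $\SIS$ solution can even be found by exhaustive search, which makes the definition concrete (the matrix and parameters below are illustrative; realistic instances are far beyond enumeration):

```python
import itertools

# Brute-force search for a SIS_{n,m,q,beta} solution in the infinity norm:
# a nonzero x with A·x ≡ 0 (mod q) and all entries bounded by beta.
q, beta = 7, 1
A = [[1, 2, 3, 4],
     [2, 3, 5, 1]]                   # A in Z_q^{n x m}, with n = 2, m = 4

def is_sis_solution(x):
    return any(x) and all(
        sum(a * xi for a, xi in zip(row, x)) % q == 0 for row in A)

# Enumerate all x with entries in {-beta, ..., beta}.
x = next(x for x in itertools.product(range(-beta, beta + 1), repeat=4)
         if is_sis_solution(x))
```

For this matrix, $(1, 1, -1, 0)$ is one such solution; the search returns whichever valid vector is found first in enumeration order.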






Evidence of the hardness of the $\SIS$ and $\ISIS$ assumptions is given by the following Lemma, which reduces $\SIVP$ to these problems.




\begin{lemma}[{\cite[Se.~9]{GPV08}}] \label{le:sishard}


If~$q \geq \sqrt{n} \beta$ and~$m,\beta \leq \mathsf{poly}(n)$, then the $\SIS_{n,m,q,\beta}$ and $\ISIS_{n,m,q,\beta}$ problems are both at least as hard as







\begin{definition}[The $\LWE$ problem] \label{de:lwe} \index{Lattices!Learning With Errors}


Let $n,m \geq 1$, $q \geq 2$, and let $\chi$ be a probability distribution on~$\mathbb{Z}$.




For a fixed $\mathbf{s} \in \mathbb{Z}_q^n$, let $A_{\mathbf{s}, \chi}$ be the distribution obtained by sampling $\mathbf{a} \hookleftarrow \U(\mathbb{Z}_q^n)$ and $e \hookleftarrow \chi$, and outputting $(\mathbf{a}, \mathbf{a}^T\cdot\mathbf{s} + e) \in \mathbb{Z}_q^n \times \mathbb{Z}_q$.


The \emph{Learning-with-Errors} problem $\mathsf{LWE}_{n,q,\chi}$ asks to distinguish~$m$ samples chosen according to $A_{\mathbf{s},\chi}$ (for $\mathbf{s} \hookleftarrow \U(\mathbb{Z}_q^n)$) from $m$ samples chosen according to $\U(\mathbb{Z}_q^n \times \mathbb{Z}_q)$.


\end{definition}
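A minimal sketch of the two distributions in the definition, with an assumed toy $B$-bounded error distribution $\chi$ (uniform on $\{-B,\ldots,B\}$) and purely illustrative parameters:

```python
import random

# Generate m samples from the LWE distribution A_{s,chi} alongside
# m uniform pairs (toy parameters, far too small for security).
n, m, q, B = 8, 16, 97, 2

s = [random.randrange(q) for _ in range(n)]              # s <- U(Z_q^n)

def lwe_sample():
    a = [random.randrange(q) for _ in range(n)]          # a <- U(Z_q^n)
    e = random.randint(-B, B)                            # e <- chi (toy)
    return a, (sum(ai * si for ai, si in zip(a, s)) + e) % q

def uniform_sample():
    return [random.randrange(q) for _ in range(n)], random.randrange(q)

samples = [lwe_sample() for _ in range(m)]
# Knowing s, LWE samples are recognizable: b - <a,s> mod q stays in [-B, B].
assert all((b - sum(ai * si for ai, si in zip(a, s))) % q <= B
           or (b - sum(ai * si for ai, si in zip(a, s))) % q >= q - B
           for a, b in samples)
```

The distinguishing task of the definition is exactly to tell `lwe_sample` outputs from `uniform_sample` outputs *without* knowing $\mathbf{s}$.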




\begin{figure}





\label{fig:lwesis}


\end{figure}






The worst-case-to-average-case reduction for $\LWE$ is stated by the following Lemma.




\begin{lemma}[{\cite{Reg05,Pei09,BLP+13}}] \label{le:lwehard}


If $q$ is a prime power, $B \geq \sqrt{n}\omega(\log n)$, $\gamma= \widetilde{\mathcal{O}}(nq/B)$, then there exists an efficiently sampleable $B$-bounded distribution~$\chi$ ({i.e.}, $\chi$ outputs samples with norm at most $B$ with overwhelming probability) such that $\mathsf{LWE}_{n,q,\chi}$ is at least as hard as $\mathsf{SIVP}_{\gamma}$.





\label{sse:latticetrapdoors}


\addcontentsline{tof}{subsection}{\protect\numberline{\thesubsection} Trappes d'un réseau euclidien}






In this section, we recall the specifications of different algorithms that use ``\textit{lattice trapdoors}''.


A trapdoor for a lattice $\Lambda$ is a \textit{short} basis of this lattice.


The knowledge of such a basis allows sampling elements in $D_{\Lambda, \sigma}$ within some restrictions given in~\cref{le:GPV}.


The existence of this sampler allows sampling short vectors, which is believed to be infeasible without knowing such a short basis.


%permits to solve hard lattice problems such as $\SIS$, which is assumed to be intractable in polynomial time.


Indeed,~\cref{le:TrapGen} shows that it is possible to sample a (statistically close to) uniform matrix $\mathbf{A} \in \ZZ_q^{n \times m}$ along with a short basis for $\Lambda^\perp_{q}(\mathbf{A})$.




Thus, a vector sampled from $D_{\Lambda^\perp_{q}(\mathbf{A}), \sigma}$, which is short with overwhelming probability according to~\cref{le:small}, is a solution to $\SIS_{n,m,q,\sigma \sqrt{n}}$.




Gentry {\em et al.}~\cite{GPV08} showed that Gaussian distributions with lattice support can be sampled efficiently given a sufficiently short basis of the lattice.









\begin{lemma}[{\cite[Le.~2.3]{BLP+13}}]


\label{le:GPV}




There exists a $\ppt$ (probabilistic polynomial-time) algorithm $\GPVSample$ that takes as input a


basis~$\mathbf{B}$ of a lattice~$\Lambda \subseteq \ZZ^n$ and a


rational~$\sigma \geq \|\widetilde{\mathbf{B}}\| \cdot \Omega(\sqrt{\log n})$,


and outputs vectors~$\mathbf{b} \in \Lambda$ with distribution~$D_{\Lambda,\sigma}$.


\end{lemma}
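For intuition only, here is a one-dimensional rejection sampler for the discrete Gaussian over $\mathbb{Z}$, using the $\exp(-x^2/2\sigma^2)$ convention; the lemma's $\GPVSample$ handles arbitrary lattices given a short basis, which this sketch does not attempt:

```python
import math, random

# Rejection sampler for the discrete Gaussian D_{Z,sigma} on the
# integer lattice (the simplest special case of lattice Gaussians).
def sample_dz(sigma, tail=12):
    bound = int(math.ceil(tail * sigma))   # tail cut; outside mass negligible
    while True:
        x = random.randint(-bound, bound)  # uniform proposal on the window
        if random.random() <= math.exp(-x * x / (2 * sigma ** 2)):
            return x                       # accept with Gaussian weight

samples = [sample_dz(3.0) for _ in range(2000)]
mean = sum(samples) / len(samples)         # concentrates around 0
```

The uniform proposal makes this sampler slow for large windows; it is meant only to illustrate what ``sampling from $D_{\Lambda,\sigma}$'' means in the one-dimensional case.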






The following Lemma states that it is possible to efficiently compute a statistically uniform~$\mathbf{A}$ along with a short basis of its orthogonal lattice $\Lambda^{\perp}_q(\mathbf{A})$.




%We


%use an algorithm that jointly samples a uniform~$\mathbf{A}$ and a short







\noindent Lemma~\ref{le:TrapGen} is often combined with the sampler from Lemma~\ref{le:GPV}. Micciancio and Peikert~\cite{MP12} proposed a more efficient approach for this combined task, which is to be preferred in practice but, for the sake of simplicity, schemes are presented using $\TrapGen$ and $\GPVSample$ in this thesis.






We also make use of an algorithm that extends a trapdoor for~$\mathbf{A} \in \ZZ_q^{n \times m}$ to a trapdoor of any~$\mathbf{B} \in \ZZ_q^{n \times m'}$ for which an $m$-subset of its columns is $\mathbf{A}$. For the sake of simplicity, we will consider the case where~$\mathbf{A}$ is the left~$n \times m$ submatrix of~$\mathbf{B}$.




\begin{lemma}[{\cite[Le.~3.2]{CHKP10}}]\label{lem:extbasis}


There exists a $\ppt$ algorithm $\ExtBasis$ that takes as inputs a





\leq \|\widetilde{\mathbf{T}_{\mathbf{A}}}\|$.


\end{lemma}






In some of our security proofs, analogously to \cite{Boy10,BHJ+15}, we also use a technique due to Agrawal, Boneh and Boyen~\cite{ABB10} that implements an all-but-one trapdoor mechanism (akin to the one of Boneh and Boyen \cite{BB04}) in the lattice setting.




\begin{lemma}[{\cite[Th.~19]{ABB10}}]\label{lem:sampler}


There exists a $\ppt$ algorithm $\SampleR$ that takes as inputs matrices $\mathbf{A}, \mathbf{C} \in \ZZ_q^{n \times m}$, a low-norm matrix $\mathbf{R} \in \ZZ^{m \times m}$,







% \section{PairingBased Cryptography} %


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%






Pairing-based cryptography was introduced by Sakai, Ohgishi and Kasahara~\cite{SOK00} to generalize Diffie-Hellman key exchange to three users in one round.


Since then, many cryptographic constructions have been proposed, such as identity-based encryption~\cite{BF01,Wat05} or group signatures~\cite{BBS04}.


Multiple constructions and parameter sets coexist for pairings.




Real-world implementations are based on elliptic curves~\cite{BN06, KSS08}, but recent advances in cryptanalysis require reassessing the security level of pairing-based cryptography~\cite{KB16,MSS17,BD18}.






In the following, we adopt a black-box definition of cryptographic pairings as bilinear maps, and rely on the assumed hardness of classical constant-size assumptions over pairing-friendly groups, namely $\SXDH$ and $\SDL$.


The notations $1_{\GG}^{}$, $1_{\Gh}^{}$ and $1_{\GT}^{}$ denote the identity elements in $\GG$, $\Gh$ and $\GT$ respectively.




\begin{restatable}[Pairings~\cite{BSS05}]{definition}{defPairings} \label{de:pairings} \index{Pairings}


A pairing is a map $e: \GG \times \Gh \to \GT$ over cyclic groups of order $p$ that verifies the following properties for any $g \in \GG, \hat{g} \in \Gh$:





\end{enumerate}


\end{restatable}
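The bilinearity axiom can be sanity-checked in a purely formal ``exponent model'', where each element $g^a \in \GG$ is represented by its discrete logarithm $a$ modulo the group order; this is of course not a secure pairing, only an illustration of the axioms:

```python
# Formal "exponent model" of a pairing: g^a, ĝ^b, and e(g, ĝ)^c are
# represented by the exponents a, b, c mod p, and e(g^a, ĝ^b) has
# exponent a·b.  Multiplication in a group is addition of exponents.
p = 101                              # illustrative prime group order

def e(a, b):
    """Exponent of e(g^a, ĝ^b) with respect to the generator e(g, ĝ)."""
    return (a * b) % p

a, b, c = 5, 7, 13
# Bilinearity in each slot: e(g^{a+c}, ĝ^b) = e(g^a, ĝ^b) · e(g^c, ĝ^b).
assert e((a + c) % p, b) == (e(a, b) + e(c, b)) % p
assert e(a, (b + c) % p) == (e(a, b) + e(a, c)) % p
# Non-degeneracy: e(g, ĝ) is a generator of G_T (nonzero exponent).
assert e(1, 1) != 0
```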






For cryptographic purposes, pairings are usually defined over elliptic curves; hence $\GT$ is a subgroup of the multiplicative group of a finite field.




The most standard assumptions over pairings are derived from the equivalent of the Diffie-Hellman assumptions in cyclic groups, described in \cref{de:DDH}.


\defDDH*


This hypothesis is used to define the $\SXDH$ assumption~\cite{Sco02} as follows.




\begin{restatable}[{$\SXDH$~\cite[As.~1]{BGdMM05}}]{definition}{defSXDH} \index{Pairings!SXDH} \label{de:SXDH}


The \emph{Symmetric eXternal DiffieHellman} ($\SXDH$) assumption holds if the $\DDH$ assumption holds both in $\GG$ and $\Gh$.







The advantages of the best $\ppt$ adversary against $\DDH$ in the groups $\GG$ and $\Gh$ are written $\advantage{\DDH}{\GG}$ and $\advantage{\DDH}{\Gh}$ respectively. Both of these quantities are assumed to be negligible under the $\SXDH$ assumption.






In \cref{ch:sigmasig}, the security of our group signature scheme relies on the $\SXDH$ assumption, which is a well-studied assumption.


Moreover, this assumption is static, meaning that the size of the assumption is independent of the number of queries made by the adversary or of any feature (e.g., the maximal number of users) of the system, and is non-interactive, in the sense that it does not involve any oracle.






This gives us stronger confidence in the security of schemes proven under this kind of assumption.


For instance, Cheon gave an attack against the $q$-Strong Diffie-Hellman problem for large values of $q$~\cite{Che06} (which usually represents the number of adversarial queries).






In \cref{ch:sigmasig}, we also rely on the following assumption, which generalizes the Discrete Logarithm problem to asymmetric groups.




\begin{restatable}[$\SDL$]{definition}{defSDL}


\label{de:SDL} \index{Pairings!SDL}





where $a \sample \ZZ_p^{}$, computing $a \in \ZZ_p^{}$.


\end{restatable}






Like $\SXDH$, this assumption is static (i.e., constant-size) and non-interactive.



