Until now, we mainly focused on the running time of the algorithms.
In cryptology, it is also important to consider the success probability of algorithms:
an attack is successful if the probability that it succeeds is noticeable.

\index{Landau notations}
\begin{definition}[Landau notations]
Let $f,g$ be two functions from $\NN$ to $\RR$. Let us define the so-called \textit{Landau notations} to compare functions asymptotically.
\begin{description}
\item[$f$ is bounded by $g$:] $f(n) = \bigO(g(n))$ if there exists a constant $k>0$ such that $|f(n)| \leq k \cdot |g(n)|$ eventually.
\item[$f$ is not dominated by $g$:] $f(n) = \Omega(g(n))$ if there exists a constant $k>0$ such that $|f(n)| \geq k \cdot |g(n)|$ eventually.
\item[$f$ is bounded by $g$ from above and below:] $f(n) = \Theta(g(n))$ if $f(n) = \bigO(g(n))$ and $f(n) = \Omega(g(n))$.
\item[$g$ dominates $f$:] $f(n) = o(g(n))$ if for any $k > 0$, $|f(n)| \leq k \cdot |g(n)|$ eventually.
\item[$f$ dominates $g$:] $f(n) = \omega(g(n))$ if for any $k > 0$, $|f(n)| > k \cdot |g(n)|$ eventually.
\end{description}
\end{definition}
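To make these asymptotic notions concrete, the following small Python sketch (purely illustrative, not part of the formal development; the sample functions and witness constants are arbitrary choices) checks the defining inequalities numerically:

```python
# Numeric illustration of the Landau definitions on sample functions.
def eventually(pred, start=1, horizon=10_000):
    """True if pred(n) holds for all n from some point on (up to the horizon)."""
    last_fail = 0
    for n in range(start, horizon):
        if not pred(n):
            last_fail = n
    return last_fail < horizon - 1

f = lambda n: 2 * n**2 + 3
g = lambda n: n**2

# f = O(g) with witness constant k = 3, and f = Omega(g) with k = 2,
# so f = Theta(g):
assert eventually(lambda n: abs(f(n)) <= 3 * abs(g(n)))
assert eventually(lambda n: abs(f(n)) >= 2 * abs(g(n)))
# n^2 = o(2^n): ANY positive constant k works eventually, even a tiny one.
for k in (1.0, 0.01, 0.0001):
    assert eventually(lambda n: n**2 <= k * 2**n, horizon=200)
```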

\index{Negligible function}
\begin{definition}[Negligible, noticeable, overwhelming probability] \label{de:negligible}
\index{Probability!Negligible} \index{Probability!Noticeable} \index{Probability!Overwhelming}
The former are the statements we need to prove, and the latter are the hypotheses on which the proofs rely.
The details of the hardness assumptions we use are given in Chapter~\ref{ch:structures}.
Nevertheless, some notions are common to all of them and are presented here.

The confidence one can put in a hardness assumption depends on many criteria.
First of all, a weaker assumption is preferred to a stronger one.
To illustrate this, let us consider the following two assumptions:
The \textit{discrete logarithm assumption} is the intractability of this problem for any \ppt{} algorithm with noticeable probability.
\end{definition}

\begin{definition}[Indistinguishability] \label{de:indistinguishability}
\index{Indistinguishability}
Let $D_0$ and $D_1$ be two probability distributions and $\param$ be public parameters. Let us define the following experiments $\Exp{\mathrm{Dist}}{\ddv, 0}$ and $\Exp{\mathrm{Dist}}{\ddv, 1}$ for any algorithm $\ddv$:
\begin{center}
\fbox{\procedure{$\Exp{\mathrm{Dist}}{\ddv, b}(\lambda)$}{%
x \sample D_b\\
b' \gets \ddv(1^\lambda, \param, x)\\
\pcreturn b'
}}
\end{center}
The advantage of an adversary $\ddv$ for this game is defined as
\[ \advantage{\mathrm{Dist}}{\ddv}(\lambda) \triangleq \left| \Pr\left[ \Exp{\mathrm{Dist}}{\ddv, 1}(\lambda) = 1\right] - \Pr\left[ \Exp{\mathrm{Dist}}{\ddv, 0}(\lambda) = 1 \right] \right|. \]

A $\ppt$ algorithm which has a noticeable advantage in the above experiments is called a \textit{distinguisher} between $D_0$ and $D_1$.

Two distributions $D_0$ and $D_1$ are \textit{computationally indistinguishable} if there does not exist any $\ppt$ distinguisher between those two distributions.
\end{definition}

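As an illustration of the advantage defined above (a toy sketch only; the distributions and the distinguisher below are arbitrary choices made for the example), one can estimate a distinguisher's advantage empirically by running both experiments many times:

```python
import random

# Toy sketch: estimate a distinguisher's advantage between two distributions.
def sample_D0(rng):
    return rng.randrange(10)                  # uniform on {0, ..., 9}

def sample_D1(rng):
    # min of two uniform samples: biased towards small values
    return min(rng.randrange(10), rng.randrange(10))

def distinguisher(x):
    return 1 if x < 4 else 0                  # guesses "D1" on small samples

def estimate_advantage(trials=100_000, seed=1):
    rng = random.Random(seed)
    p1 = sum(distinguisher(sample_D1(rng)) for _ in range(trials)) / trials
    p0 = sum(distinguisher(sample_D0(rng)) for _ in range(trials)) / trials
    return abs(p1 - p0)

adv = estimate_advantage()
# Exact advantage of this distinguisher: Pr[min < 4] - Pr[x < 4]
#   = (1 - 0.6**2) - 0.4 = 0.24, so the estimate lands close to that.
assert 0.20 < adv < 0.28
```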
\begin{restatable}[Decisional Diffie-Hellman]{definition}{defDDH}
\index{Discrete Logarithm!Decisional Diffie-Hellman} \label{de:DDH}
Let $\GG$ be a cyclic group of order $p$. The \emph{decisional Diffie-Hellman} ($\DDH$) distribution is
\[\mathsf{D}_{\DDH} \triangleq \{ (g, g^a, g^b, g^{ab}) \mid g \sample \U(\GG), a,b \sample \U(\ZZ_p) \}.\]

The \textit{\DDH assumption} states that the distributions $\mathsf{D}_{\DDH}$ and $\U(\GG^4)$ are computationally indistinguishable given the public parameter $\GG$ (the description of the group).
\end{restatable}
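To fix ideas, the following toy Python sketch samples from $\mathsf{D}_{\DDH}$ and from $\U(\GG^4)$ in a tiny subgroup of $\ZZ_{23}^*$ (an arbitrary illustrative choice; parameters this small offer no security, and the sampler restricts $g$ to non-identity elements for simplicity):

```python
import random

# Toy sketch: DDH distribution vs uniform over G^4, in the subgroup G of
# prime order Q = 11 generated by 3 inside Z_23^*. Insecurely small on purpose.
P, Q = 23, 11
G = sorted({pow(3, i, P) for i in range(Q)})   # the order-11 subgroup

def sample_ddh(rng):
    g = rng.choice([h for h in G if h != 1])   # non-identity, hence a generator
    a, b = rng.randrange(1, Q), rng.randrange(1, Q)
    return (g, pow(g, a, P), pow(g, b, P), pow(g, a * b, P))

def sample_uniform(rng):
    return tuple(rng.choice(G) for _ in range(4))

def is_ddh_tuple(t):
    # Brute-force discrete logs (feasible only because the group is tiny).
    g, ga, gb, gc = t
    logs = {pow(g, i, P): i for i in range(Q)}
    if not (ga in logs and gb in logs and gc in logs):
        return False
    return (logs[ga] * logs[gb]) % Q == logs[gc]

rng = random.Random(0)
assert is_ddh_tuple(sample_ddh(rng))
# A uniform tuple is a valid DDH tuple only with probability about 1/Q:
hits = sum(is_ddh_tuple(sample_uniform(rng)) for _ in range(1000))
assert hits < 250
```

Of course, in such a small group the two distributions are easy to tell apart by brute force; the assumption only makes sense for cryptographically large groups.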

The discrete logarithm assumption is, for instance, implied by the decisional Diffie-Hellman assumption.
This is why it is preferable to work with the discrete logarithm assumption when it is possible.
For instance, there is no known security proof for the El Gamal encryption scheme from DLP.

Another criterion to evaluate the security of an assumption is to look at whether the assumption is ``simple to state'' or not.
The following section explains how to define the security of a cryptographic primitive.
\section{Security Games and Simulation-Based Security} \label{se:games-sim}
\addcontentsline{tof}{section}{\protect\numberline{\thesection} Preuves par jeux et preuves par simulation}

%Up to now, we defined the structure on which security proofs works. Let us now define what we are proving.
%An example of what we are proving has been shown in Section~\ref{se:models} with cryptographic hash functions.

In order to define security properties, a common approach is to define security \emph{games} (or \emph{experiments})~\cite{GM82,Sho06}.

Two examples of security games are given in Figure~\ref{fig:sec-game-examples}; they formalize the notions of \emph{indistinguishability under chosen-plaintext attacks} (\indcpa) for public-key encryption (\PKE) schemes and of \emph{existential unforgeability under chosen message attacks} (EU-CMA) for signature schemes.

\begin{figure}
\centering
\end{figure}

\index{Reduction!Advantage} \index{Encryption!IND-CPA}
\indcpa{} security is modeled by an \emph{indistinguishability} game, meaning that the goal of the adversary $\adv$ in this game is to distinguish between two messages from different distributions.
To model this, for any adversary $\adv$, we define a notion of \emph{advantage} for the $\indcpa$ game as
\[
\advantage{\indcpa}{\adv}(\lambda)
\triangleq
\left| \Pr\left[ \Exp{\indcpa}{\adv,1}(\lambda) = 1 \right] - \Pr\left[ \Exp{\indcpa}{\adv, 0}(\lambda) = 1\right] \right|.
\]

We say that a $\PKE$ scheme is $\indcpa$ if, for any $\ppt$ adversary $\adv$, the advantage of $\adv$ in the $\indcpa$ game is negligible with respect to $\lambda$.

This definition of advantage models that the adversary is unable to distinguish whether the ciphertext $\mathsf{ct}$ comes from the experiment $\Exp{\indcpa}{\adv, 0}$ or the experiment $\Exp{\indcpa}{\adv, 1}$.
As a consequence, the adversary cannot learn a single bit of information about the underlying plaintext.

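To illustrate the flavour of this definition (a deliberately simplified sketch: no key generation or encryption oracle, only the challenge phase, with invented toy ``schemes''), a one-time pad with a fresh key makes the two experiments identically distributed, while a scheme that outputs the plaintext unchanged is trivially distinguished:

```python
import os

# Toy sketch of the IND-CPA challenge phase for two toy "schemes".
def otp_encrypt(m):
    key = os.urandom(len(m))            # fresh one-time pad for each challenge
    return bytes(a ^ b for a, b in zip(m, key))

def identity_encrypt(m):
    return m                            # a trivially broken "scheme"

M0, M1 = b"attack", b"defend"           # challenge messages of equal length

def ind_cpa_experiment(encrypt, adversary, b):
    ct = encrypt([M0, M1][b])
    return adversary(ct)                # the adversary outputs its guess b'

def estimate_advantage(encrypt, adversary, trials=5_000):
    ones = [sum(ind_cpa_experiment(encrypt, adversary, b) for _ in range(trials))
            for b in (0, 1)]
    return abs(ones[1] - ones[0]) / trials

guess = lambda ct: 1 if ct == M1 else 0

# Identity "encryption" leaks everything: advantage exactly 1.
assert estimate_advantage(identity_encrypt, guess) == 1.0
# The one-time pad yields the same ciphertext distribution for both messages.
assert estimate_advantage(otp_encrypt, guess) < 0.05
```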
This kind of definition is also useful to model anonymity.
For instance, in \cref{sec:RGSdefsecAnon}, anonymity for group signatures is defined in a similar fashion (\cref{def:anon}).

To handle indistinguishability between distributions, it is useful to quantify the distance between two distributions.
In this context, we define the statistical distance as follows.

\begin{definition}[Statistical Distance] \index{Probability!Statistical Distance}
Let $P$ and $Q$ be two distributions over a countable support $S$. The \emph{statistical distance} between $P$ and $Q$ is defined as $\Delta(P,Q) \triangleq \frac{1}{2} \sum_{x \in S} \left| P(x) - Q(x) \right|$.
\end{definition}
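As a quick illustration (assuming the standard formulation $\Delta(P,Q) = \frac{1}{2}\sum_x |P(x)-Q(x)|$; the distributions below are arbitrary examples), the statistical distance between explicit finite distributions can be computed directly:

```python
# Toy sketch: statistical distance Delta(P, Q) = 1/2 * sum_x |P(x) - Q(x)|
# for distributions given as dictionaries mapping outcomes to masses.
def statistical_distance(P, Q):
    support = set(P) | set(Q)
    return sum(abs(P.get(x, 0.0) - Q.get(x, 0.0)) for x in support) / 2

uniform = {x: 0.25 for x in "abcd"}
biased  = {"a": 0.40, "b": 0.30, "c": 0.20, "d": 0.10}

d = statistical_distance(uniform, biased)
# |0.25-0.4| + |0.25-0.3| + |0.25-0.2| + |0.25-0.1| = 0.4, halved: 0.2
assert abs(d - 0.20) < 1e-12

# It behaves as a distance: zero on identical inputs, triangle inequality.
assert statistical_distance(uniform, uniform) == 0.0
mid = {"a": 0.30, "b": 0.30, "c": 0.25, "d": 0.15}
assert statistical_distance(uniform, biased) <= \
       statistical_distance(uniform, mid) + statistical_distance(mid, biased)
```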

Two distributions are \textit{statistically close} if their statistical distance is negligible with respect to the security parameter.

It is worth noticing that if two distributions are statistically close, then the advantage of an adversary in distinguishing between them is negligible.
%Another property used in the so-called \textit{hybrid argument}\index{Hybrid argument} is the \textit{triangular inequality} that follows from the fact that the statistical distance is a distance.

Another interesting metric, which will be used in the security proof of %TODO
is the Rényi Divergence:
\Supp(Q)$, and $a \in ]1, +\infty[$, we define the \emph{R\'enyi divergence} of order $a$ by:
\[ R_a(P||Q) = \left( \sum_{x \in \Supp(P)} \frac{P(x)^a}{Q(x)^{a-1}} \right)^{\frac{1}{a-1}}. \]

We define the R\'enyi divergences of orders $1$ and $+\infty$ as:

\[ R_1(P||Q) = \exp\left( \sum_{x \in \Supp(P)} P(x) \log \frac{P(x)}{Q(x)} \right) \mbox{ and } R_\infty (P||Q) = \max_{x \in \Supp(P)} \frac{P(x)}{Q(x)}. \]

The divergence $R_1$ is the exponential of the Kullback-Leibler divergence.
\end{restatable}
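The following toy computation (illustrative only; the distributions are arbitrary, and we assume $\Supp(P) \subseteq \Supp(Q)$ so that all ratios are defined) evaluates these divergences on a small example and checks the standard fact that $R_a$ is non-decreasing in the order $a$:

```python
import math

# Toy sketch: Renyi divergences R_a, R_1, R_inf as in the definition above,
# for distributions with Supp(P) contained in Supp(Q).
def renyi(P, Q, a):
    assert a > 1
    s = sum(P[x] ** a / Q[x] ** (a - 1) for x in P)
    return s ** (1 / (a - 1))

def renyi_1(P, Q):      # the exponential of the Kullback-Leibler divergence
    return math.exp(sum(P[x] * math.log(P[x] / Q[x]) for x in P))

def renyi_inf(P, Q):
    return max(P[x] / Q[x] for x in P)

P = {"a": 0.5, "b": 0.3, "c": 0.2}
Q = {"a": 0.4, "b": 0.4, "c": 0.2}

# R_a is non-decreasing in a and bounded above by R_inf:
values = [renyi_1(P, Q)] + [renyi(P, Q, a) for a in (1.5, 2, 4, 10)]
assert all(x <= y + 1e-12 for x, y in zip(values, values[1:]))
assert values[-1] <= renyi_inf(P, Q) + 1e-12
# The divergence of a distribution from itself is trivial: R_1(P, P) = 1.
assert abs(renyi_1(P, P) - 1.0) < 1e-12
```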

Bai, Langlois, Lepoint, Stehlé and Steinfeld~\cite{BLL+15} observed that the Rényi Divergence has a property similar to the \textit{triangular inequality} with respect to multiplication, which can be useful in the context of unforgeability games, as we will explain in the following paragraph. Prest further presented multiple uses of the Rényi Divergence in~\cite{Pre17}.

We notice that security definitions for signature schemes are not indistinguishability-based experiments, but search experiments (i.e., the adversary has to output a string rather than distinguishing between two experiments by outputting a single bit).
The goal of the adversary is not to distinguish between two distributions, but to forge a new signature from what it learns \emph{via} signature queries.

Those signature queries are handled by an oracle \oracle{sign}{sk,\cdot}, which on input $m$ returns the signature $\sigma = \Sigma.\mathsf{sign}(sk, m)$ and adds $\sigma$ to $\ensemble{sign}$. The initialization of this set and the oracle's behavior may be omitted in the rest of this thesis for the sake of readability.

\index{Signatures!EU-CMA}
For EU-CMA, the advantage of an adversary $\adv$ is defined as
\[
\Pr\left[ \Sigma.\mathsf{verif}(vk, m^\star, \sigma^\star) = \top~\land~ \sigma^\star \notin \ensemble{sign} \right].
\]
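The shape of this experiment can be sketched as follows (a toy illustration only: HMAC-SHA256 stands in for the signature scheme, which is an assumption of the example, and the bookkeeping mirrors the oracle described above):

```python
import hashlib
import hmac
import os

# Toy sketch of the EU-CMA experiment, with HMAC-SHA256 as a stand-in scheme.
def eu_cma_experiment(adversary):
    sk = os.urandom(32)
    issued = set()                     # the set of signatures returned so far
    def sign_oracle(m):
        s = hmac.new(sk, m, hashlib.sha256).digest()
        issued.add(s)
        return s
    m_star, sig_star = adversary(sign_oracle)
    expected = hmac.new(sk, m_star, hashlib.sha256).digest()
    # The adversary wins if the forgery verifies and was not issued before.
    return hmac.compare_digest(expected, sig_star) and sig_star not in issued

# A "replay" adversary that merely resubmits an oracle answer never wins:
def replay_adversary(oracle):
    m = b"hello"
    return m, oracle(m)

assert eu_cma_experiment(replay_adversary) is False
```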

A signature scheme is considered unforgeable under chosen message attacks if, for any $\ppt$ adversary $\adv$, the advantage of $\adv$ is negligible with respect to $\lambda$.

This means that, within reasonable expected time\footnote{Reasonable time may have multiple definitions; in the context of theoretical cryptography, we assume that quasi-polynomial time is the upper bound of reasonable.}, no adversary can create a new valid signature without the signing key ($sk$). This kind of definition is often used for authentication primitives.
In our example of group signatures in Part~\ref{pa:gs-ac}, the \emph{security against misidentification attacks} (or \emph{traceability}) experiment follows the same structure.
This security notion captures that no collusion between malicious users and the group authority can create valid signatures that open on an honest user, or do not open to a valid registered user.

\begin{figure}
\centering
\caption{Simulation-based cryptography.} \label{fig:sim-crypto}
\end{figure}

The security definition of $\indcpa$ is defined via an indistinguishability experiment.
The first security definition for $\PKE$ was nevertheless a simulation-based definition~\cite{GM82}.
In this context, instead of distinguishing between two messages, the goal is to distinguish between two different environments.
\index{Universal Composability}
In the following, we will use the \emph{Real world}/\emph{Ideal world} paradigm~\cite{Can01} to describe those different environments.
Namely, for $\PKE$, it means that, for any $\ppt$ adversary~$\widehat{\adv}$ --\,in the \emph{Real world}\,-- that interacts with a challenger $\cdv$,
there exists a $\ppt$ \emph{simulator} $\widehat{\adv}'$ --\,in the \emph{Ideal world}\,-- that interacts with the same challenger $\cdv'$, with the difference that the functionality $F$ is replaced by a trusted third party in the \emph{Ideal world}.

In other words, it means that the information that $\widehat{\adv}$ obtains from its interaction with the challenger $\cdv$ does not allow $\widehat{\adv}$ to learn any more information than it could via black-box access to the functionality.

In the context of $\PKE$, the functionality is the access to the public key $pk$ as described in Line 2 of $\Exp{\indcpa}{\adv, b}(\lambda)$.
Therefore, the existence of a simulator $\widehat{\adv}'$ that does not use $pk$ shows that $\widehat{\adv}$ does not learn anything from $pk$.

For $\PKE$, the simulation-based definition of chosen-plaintext security is equivalent to the indistinguishability definition~\cite[Se. 5.2.3]{Gol04}, even if the two security definitions are conceptually different.
As indistinguishability-based models are often easier to work with, they are more commonly used to prove security of $\PKE$ schemes.
For other primitives, such as Oblivious Transfer ($\OT$) described in \cref{ch:ac-ot}, the simulation-based definitions are strictly stronger than the indistinguishability ones~\cite{NP99}.
Therefore, it is preferable to have security proofs under the strongest \emph{possible} definitions in theoretical cryptography.

Even so, the question of which security model is the strongest remains a complex one:
the answer mainly depends on the manner in which the scheme will be used, as well as on the adversarial model.
%If some security models imply others, it is not necessarily always the case.
For example, we know from the work of Canetti and Fischlin~\cite{CF01} that it is impossible to construct a $\UC$-secure bit commitment scheme\footnote{The definition of a commitment scheme is given in~\cref{de:commitment}. In short, it is the digital equivalent of a safe.} in the plain model, while the design of such a primitive is possible assuming a \textit{trusted setup}.
%Hence, the question of quantifying whether a standard-model commitment scheme has stronger security than a UC commitment scheme in the trusted-setup setting under similar assumptions is not trivial.

\index{Universal Composability!Common Reference String}
In the \textit{trusted setup} model, or \textit{common reference string} (\textsf{CRS}) model, all the participants are assumed to have access to a common string $\crs \in \{0,1\}^\star$ drawn from some specific distribution $D_\crs$.