The name ``reduction'' comes from computational complexity.
In this field of computer science, research focuses on defining equivalence classes for problems, based on the amount of resources necessary to solve them.
To establish a lower bound on the complexity of a problem, a classical method is to provide a construction that maps an instance of a problem $A$ to an instance of a problem $B$, such that if a solution of $B$ is found, then a solution of $A$ is found as well.
This amounts to saying that problem $B$ is at least as hard as problem $A$, up to the complexity of the transformation.
For instance, Cook showed that satisfiability of Boolean formulas is at least as hard as every problem in $\NP$~\cite{Coo71}, up to a polynomial-time transformation.
Let us now define more formally the notions of reduction and computability using the computational model of Turing machines.
In cryptology, it is also important to consider the success probability of algorithms: an attack is successful if the probability that it succeeds is noticeable.
\index{Negligible function}
\scbf{Notation.} Let $f : \NN \to [0,1]$ be a function. The function $f$ is called \emph{negligible} if $f(n) = n^{-\omega(1)}$, and this is written $f(n) = \negl[n]$.
Non-negligible functions are called \emph{noticeable} functions.
If $f = 1 - \negl[n]$, then $f$ is called \emph{overwhelming}.
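For instance, $f(n) = 2^{-n}$ is negligible, since for every constant $c > 0$ we have $2^{-n} \leq n^{-c}$ for all sufficiently large $n$, while $f(n) = n^{-3}$ is noticeable.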
Once the notions related to the core of the proof are defined, we have to define the objects we work on: namely, what we want to prove, and the hypotheses on which we rely, also called ``hardness assumptions''.
The details of the hardness assumptions we use are given in Chapter~\ref{chap:structures}.
Nevertheless, some notions are common to all of them and are presented here.
The amount of confidence one can put in a hardness assumption depends on many criteria.
To illustrate this, let us consider the following two assumptions:
\begin{definition}[Discrete Logarithm]
\index{Discrete Logarithm!Assumption}
\index{Discrete Logarithm!Problem}
The \emph{discrete logarithm problem} is defined as follows. Let $(\GG, \cdot)$ be a cyclic group of order $p$.
Given $g,h \in \GG$, the goal is to find an integer $a \in \Zp$ such that $g^a = h$.
The \textit{discrete logarithm assumption} is the intractability of this problem.
\end{definition}
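To make this concrete, the following Python sketch instantiates the problem in the multiplicative group of integers modulo a toy prime (a slight simplification of the setting above, where the group has prime order $p$). The exhaustive search below takes time linear in the group order, hence exponential in the bit-size of the modulus; the assumption states that no $\ppt$ algorithm does significantly better in a well-chosen group.
\begin{verbatim}
# Toy discrete logarithm instance, for illustration only:
# the modulus is far too small for the assumption to hold.
p = 101                 # small prime modulus
g = 2                   # generator of the multiplicative group mod p
a = 47                  # secret exponent
h = pow(g, a, p)        # public instance: find a from (g, h)

def dlog_brute_force(g, h, p):
    """Solve g^x = h (mod p) by exhaustive search."""
    x, cur = 0, 1
    while cur != h:
        cur, x = (cur * g) % p, x + 1
    return x

assert dlog_brute_force(g, h, p) == a
\end{verbatim}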
\begin{definition}[Decisional Diffie-Hellman] \label{de:DDH} \index{Discrete Logarithm!Decisional Diffie-Hellman}
Let $\GG$ be a cyclic group of order $p$. The \emph{decisional Diffie-Hellman} ($\DDH$) problem is the following.
Given the tuple $(g, g_1, g_2, g_3) = (g, g^a, g^b, g^c) \in \GG^4$, the goal is to decide whether $c = ab$ or $c$ is sampled uniformly in $\Zp$.
The \textit{\DDH assumption} is the intractability of the problem for any $\ppt$ algorithm.
\end{definition}
For instance, the discrete logarithm assumption is implied by the decisional Diffie-Hellman assumption.
Indeed, if one is able to solve the discrete logarithm problem, then it suffices to compute the discrete logarithm of $g_1$, say $\alpha$, and then check whether $g_2^\alpha = g_3$.
This is why it is preferable to work with the discrete logarithm assumption when possible.
For instance, there is no known security proof for the ElGamal encryption scheme from the discrete logarithm assumption alone.
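The reduction sketched above is straightforward to make concrete. As an illustration only, the following sketch (in the same toy group as before) turns any discrete logarithm solver, here the brute-force one, into a $\DDH$ distinguisher.
\begin{verbatim}
import random

p, g = 101, 2           # toy group as before; g has order 100

def dlog(u, v):
    # Brute-force discrete logarithm of v in base u, as before.
    x, cur = 0, 1
    while cur != v:
        cur, x = (cur * u) % p, x + 1
    return x

def ddh_distinguisher(g1, g2, g3):
    """Decide whether (g, g1, g2, g3) is a Diffie-Hellman tuple."""
    alpha = dlog(g, g1)             # discrete logarithm of g1
    return pow(g2, alpha, p) == g3  # check g2^alpha = g3

# A well-formed tuple is always accepted:
a, b = random.randrange(100), random.randrange(100)
assert ddh_distinguisher(pow(g, a, p), pow(g, b, p), pow(g, a * b, p))
\end{verbatim}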
Another criterion to evaluate the security of an assumption is whether the assumption is ``simple'' or not.
It is harder to evaluate the security of an assumption such as $q$-Strong Diffie-Hellman, a variant of the Diffie-Hellman problem where the adversary is given the tuple $(g, g^a, g^{a^2}, \ldots, g^{a^q})$ and has to compute $g^{a^{q+1}}$.
The security of this assumption inherently depends on the parameter $q$.
Moreover, Cheon proved that for large values of $q$, this assumption is no longer trustworthy~\cite{Che06}.
These parameterized assumptions are called \emph{$q$-type assumptions}.
There are also other kinds of non-static assumptions, such as interactive assumptions.
An example is the ``\emph{$1$-more-\textsf{DL}}'' assumption: given oracle access to a discrete logarithm solver for $n$ queries (where $n$ is not known in advance), the $1$-more-\textsf{DL} problem is to solve an $(n+1)$-th discrete logarithm instance.
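To fix ideas, here is a sketch of the corresponding experiment in the same toy group, with the simplification that the oracle budget $n$ is fixed upfront; the names are illustrative.
\begin{verbatim}
import random

p, g, q = 101, 2, 100   # toy group as before; q is the order of g

class OneMoreDLGame:
    """Toy 1-more-DL experiment: n+1 challenges, at most n oracle queries."""
    def __init__(self, n):
        self.exps = [random.randrange(q) for _ in range(n + 1)]
        self.challenges = [pow(g, e, p) for e in self.exps]
        self.budget = n

    def dlog_oracle(self, v):
        # Answers at most n discrete logarithm queries.
        assert self.budget > 0, "oracle budget exhausted"
        self.budget -= 1
        x, cur = 0, 1
        while cur != v:
            cur, x = (cur * g) % p, x + 1
        return x

    def check(self, answers):
        # The adversary wins by returning all n+1 discrete logarithms.
        return answers == self.exps

# An adversary may query the oracle on at most n of the n+1 challenges:
game = OneMoreDLGame(2)
known = [game.dlog_oracle(c) for c in game.challenges[:2]]
\end{verbatim}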
The next step to study in a security proof is the \emph{security model}, in other words, the context in which the proofs are made.
This is the topic of the next section.
\section{Random-Oracle Model and Standard Model} \label{se:models}
The most general model in which to carry out security proofs is the standard model.
In this model, nothing special is assumed, and every assumption is made explicit.
For instance, cryptographic hash functions enjoy several different associated security notions~\cite{KL07}.
Among them is collision resistance, which states that it is intractable to find two distinct strings that map to the same digest.
A weaker notion is second pre-image resistance, which states that given $x \in \bit^\star$, it is infeasible for a $\ppt$ algorithm to find $\tilde{x} \neq x$ such that $h(x) = h(\tilde{x})$.
Similarly to what we saw in the previous section about $\DDH$ and $\DLP$, we can see that collision resistance implies second pre-image resistance.
Indeed, if there is an attacker against second pre-image resistance, then one can choose a string $x \in \bit^\star$ and obtain from this attacker a second string $\tilde{x} \neq x$ such that $h(x) = h(\tilde{x})$: the pair $(x, \tilde{x})$ is a collision.
Hence, a hash function that is collision resistant is also second pre-image resistant.
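This argument is itself a two-line reduction. The sketch below assumes a hypothetical second pre-image attacker \texttt{second\_preimage\_attack} and turns it into a collision finder.
\begin{verbatim}
def collision_finder(h, second_preimage_attack):
    """Turn a second pre-image attacker into a collision finder.

    h is the hash function; second_preimage_attack(h, x) is a
    hypothetical oracle returning x2 != x with h(x) == h(x2).
    """
    x = b"any fixed string"           # the reduction picks x freely
    x2 = second_preimage_attack(h, x)
    assert x2 != x and h(x) == h(x2)  # (x, x2) is a collision
    return x, x2
\end{verbatim}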
\index{Random Oracle Model}
The \textit{random oracle model}~\cite{FS86,BR93}, or \ROM, is an idealized security model where hash functions are assumed to behave as truly random functions.
This implies collision resistance (if the codomain of the hash function is large enough, which should be the case for a cryptographic hash function) and other security notions related to hash functions.
In this model, hash function evaluations are handled as oracle queries (which can then be reprogrammed by the reduction).
We can notice that this security model is unrealistic~\cite{CGH04}. Let us construct a \emph{counter-example}.
Let $\pi$ be a secure signature scheme, and let $\pi_y$ be the scheme that returns $\pi(m)$ as the signature of a message $m$ if $h(0) \neq y$, and $0$ as the signature otherwise.
In the \ROM, $h$ behaves as a random function.
Hence, the probability that $h(0) = y$ is negligible with respect to the security parameter for any fixed $y$.
On the other hand, when $h$ is instantiated with a real-world hash function, the scheme $\pi_{h(0)}$ is completely insecure as a signature scheme. \hfill $\square$
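To make the counter-example explicit, here is a sketch of $\pi_y$, where the random oracle $h$ is instantiated with SHA-256 and \texttt{pi\_sign} stands for the (hypothetical) secure signing algorithm of $\pi$.
\begin{verbatim}
import hashlib

def h(m):
    # Real-world instantiation of the random oracle.
    return hashlib.sha256(m).digest()

def pi_y_sign(pi_sign, y, m):
    """The contrived scheme pi_y: fine in the ROM, broken when h is fixed."""
    if h(b"0") != y:        # b"0" encodes the fixed input 0
        return pi_sign(m)   # behaves exactly like the secure scheme pi
    return 0                # degenerate signature: the scheme is broken

# For the adversarial choice y = h(0), every message is "signed" with 0:
assert pi_y_sign(lambda m: b"a signature", h(b"0"), b"message") == 0
\end{verbatim}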
In this context, one may wonder why the \ROM is still used in cryptographic proofs~\cite{LMPY16,LLM+16}.
One reason is that some constructions are not yet known to exist in the standard model.
One example is non-interactive zero-knowledge (\NIZK) proofs from lattice assumptions~\cite{Ste96,Lyu08}.
\NIZK proofs form an elementary building block for privacy-based cryptography, and forbidding the use of the \ROM may slow down research in this direction~\cite{LLM+16}.
Another reason to use the \ROM in cryptography is that it provides a sufficient guarantee in real-world cryptography~\cite{BR93}.
The example we built earlier is artificial, and in practice there are no known attacks against real-world schemes proven secure in the \ROM.
Moreover, security in the standard model implies security in the \ROM.
As a consequence, constructions in the \ROM are at least as efficient as their standard-model counterparts, and for practical purposes they are usually more efficient.
For instance, the scheme we present in Chapter~\ref{ch:sigmasig} adapts to the \ROM the standard-model construction of dynamic group signatures of Libert, Peters and Yung~\cite{LPY15}.
This transformation reduces the signature size from $32$ elements in $\GG$, $14$ elements in $\Gh$ and \textit{one} scalar in the standard model~\cite[App.~J]{LPY15} down to $7$ elements in $\GG$ and $3$ scalars in the \ROM.