That is why we now define the notion of a polynomial-time reduction.

\begin{figure}
\centering
\input fig-poly-red
\caption{Illustration of a polynomial-time reduction from $A$ to $B$~{\cite[Fig. 2.1]{AB09}}.} \label{fig:poly-reduction}
\end{figure}
In other words, a polynomial-time reduction from $A$ to $B$ is the description of a polynomial-time algorithm (also called ``\emph{the reduction}'') that uses an algorithm for $B$ in a black-box manner to solve $A$.
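To make this black-box use concrete, the simplest shape such a reduction can take is sketched below; the names are generic: $\mathcal{R}$ is the reduction, $\mathcal{O}_B$ the black-box algorithm solving $B$, and $f$ and $g$ are polynomial-time computable maps.
% Illustration added for concreteness; \mathcal{R}, \mathcal{O}_B, f and g are generic names, not notation from this chapter.
\[
\mathcal{R}^{\mathcal{O}_B}(x) : \quad y \gets f(x); \quad b \gets \mathcal{O}_B(y); \quad \text{return } g(b).
\]
Since $f$ and $g$ run in polynomial time and $\mathcal{O}_B$ is used as a black box, $\mathcal{R}$ solves $A$ in polynomial time (counting each oracle call as one step) whenever $\mathcal{O}_B$ solves $B$.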
An attack is successful if the probability that it succeeds is noticeable.

\index{Negligible function}
\begin{definition}[Negligible, noticeable, overwhelming probability] \label{de:negligible}
\index{Probability!Negligible} \index{Probability!Noticeable} \index{Probability!Overwhelming}
Let $f : \NN \to [0,1]$ be a function. The function $f$ is said to be \emph{negligible} if $f(n) = n^{-\omega(1)}_{}$, and this is written $f(n) = \negl[n]$.\\
Non-negligible functions are also called \emph{noticeable} functions.\\
Finally, if $f(n) = 1 - \negl[n]$, $f$ is said to be \emph{overwhelming}.
\end{definition}
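As a sanity check of this definition, the function $2^{-n}_{}$ is negligible, whereas an inverse polynomial such as $n^{-10}_{}$ is noticeable:
% Worked example added for illustration.
\[
2^{-n} = n^{-n/\log_2 n} = n^{-\omega(1)} = \negl[n],
\qquad\text{whereas}\qquad
n^{-10} \neq \negl[n].
\]
Indeed, for every constant $c > 0$ we have $2^{-n} \leq n^{-c}$ for all sufficiently large $n$, while $n^{-10}$ decreases only polynomially.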
Once we have defined these notions related to the core of the proof, we have to define the objects we work on.
Namely, we must define \textit{what we want to prove} and the hypotheses on which we rely, also called ``\textit{hardness assumptions}''.
\index{Hardness assumptions}
The details of the hardness assumptions we use are given in Chapter~\ref{ch:structures}.
Nevertheless, some notions are common to all of them and are presented here.
To illustrate this, let us consider the following two assumptions:

\begin{definition}[Discrete logarithm]
The \emph{discrete logarithm problem} is defined as follows. Let $(\GG, \cdot)$ be a cyclic group of order $p$.
Given $g,h \in \GG$, the goal is to find an integer $a \in \Zp^{}$ such that $g^a_{} = h$.
The \textit{discrete logarithm assumption} states that this problem is intractable, i.e., that no \ppt{} algorithm solves it with noticeable probability.
\end{definition}
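With the vocabulary of Definition~\ref{de:negligible}, this assumption can be made explicit. The formalization below is an illustration we add, assuming groups indexed by a security parameter $\lambda$ with $\log_2 p$ polynomial in $\lambda$, and writing $a \xleftarrow{\$} \Zp$ for uniform sampling: for every \ppt{} algorithm $\mathcal{A}$,
% Formalization added for illustration; \lambda and the sampling notation \xleftarrow{\$} are ours, not this chapter's.
\[
\Pr\bigl[\mathcal{A}(g, g^a_{}) = a \;:\; a \xleftarrow{\$} \Zp\bigr] = \negl[\lambda].
\]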
\begin{restatable}[Decisional Diffie-Hellman]{definition}{defDDH}
Let $(\GG, \cdot)$ be a cyclic group of order $p$, and let $g$ be a generator of $\GG$.
The $\DDH$ assumption states that no \ppt{} algorithm can distinguish, with noticeable advantage, between the distributions $(g, g^a_{}, g^b_{}, g^{ab}_{})$ and $(g, g^a_{}, g^b_{}, g^c_{})$, where $a, b, c$ are sampled uniformly in $\Zp$.
\end{restatable}

For instance, cryptographic hash functions enjoy several different associated security notions.
The strongest is collision resistance, which states that it is intractable to find two distinct strings that map to the same digest.
A weaker notion is second pre-image resistance, which states that given $x \in \bit^\star_{}$, it is not possible for a $\ppt$ algorithm to find $\tilde{x} \neq x \in \bit^\star_{}$ such that $h(x) = h(\tilde{x})$.
Similarly to the relation between $\DDH$ and $\DLP$ seen in the previous section, collision resistance implies second pre-image resistance.
Indeed, if there is an attacker against second pre-image resistance, then one can choose a string $x \in \bit^\star_{}$ and obtain from this attacker another string $\tilde{x} \neq x \in \bit^\star_{}$ such that $h(x) = h(\tilde{x})$: the pair $(x, \tilde{x})$ is then a collision. So a hash function that is collision resistant is also second pre-image resistant.
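This argument can be summarized by a one-line reduction; the sketch below is ours, with $\mathcal{B}$ the second pre-image attacker, $\mathcal{A}_{\mathrm{coll}}$ the collision finder built from it, and $\lambda$ an arbitrary input length.
% Sketch added for illustration; \mathcal{A}_{\mathrm{coll}}, \mathcal{B} and \lambda are our notation.
\[
\mathcal{A}_{\mathrm{coll}} : \quad x \xleftarrow{\$} \bit^{\lambda}; \quad \tilde{x} \gets \mathcal{B}(x); \quad \text{return } (x, \tilde{x}).
\]
Whenever $\mathcal{B}$ succeeds, the output satisfies $\tilde{x} \neq x$ and $h(x) = h(\tilde{x})$, hence it is a valid collision, and $\mathcal{A}_{\mathrm{coll}}$ succeeds with the same probability.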
\index{Random Oracle Model}
The \textit{random oracle model}~\cite{FS86,BR93}, or \ROM, is an idealized security model where hash functions are assumed to behave as truly random functions.
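In proofs, such a truly random function is typically simulated by \emph{lazy sampling}, sketched here with our notation: the oracle keeps a table $T$, initially empty, answers each fresh query with a uniformly random $\ell$-bit string, and answers repeated queries consistently.
% Sketch added for illustration; the table T and output length \ell are our notation.
\[
\mathcal{H}(x) : \quad \text{if } T[x] \text{ is undefined, set } T[x] \xleftarrow{\$} \bit^{\ell}; \quad \text{return } T[x].
\]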