espitau 8 years ago
parent
commit
3bb8aad628
2 changed files with 190 additions and 113 deletions
  1. main.tex (17 additions, 14 deletions)
  2. main.tex.bak (173 additions, 99 deletions)

+ 17 - 14
main.tex

@@ -298,11 +298,11 @@ rates for fully random search with 400 iterations.
 
 \subsection{Genetic $(\mu+\lambda)$ search}
 
-The third heuristic implemented is the $(\mu+\lambda)$ genectic search.
+The third heuristic implemented is the $(\mu+\lambda)$ genetic search.
 This heuristic is inspired by the evolution of species: a family
 of $\mu$ genes is known (they are generated randomly at the beginning),
 from which $\lambda$ new genes are derived. A gene is the set of parameters
-we are optimising, i.e. the permutations.
+we are optimizing, i.e. the permutations.
 Each one is derived either from one gene by applying a mutation
 (here a transposition of one of the permutations), or from two
 genes by applying a crossover: a blending of both genes (the
@@ -312,12 +312,25 @@ among $\mu$ and $\lambda$. After that, only the $\mu$ best
 genes are kept, according to their fitness, and the process
 can start again.
 
-Because variating over $\mu$ or $\lambda$ does not change fundamentaly
+Because varying $\mu$ or $\lambda$ does not fundamentally change
 the algorithm, we have chosen to fix $\mu=\lambda=5$ once and for all,
 which seemed to be a good trade-off between the running time of
 each iteration and the size of the family.
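+
+For concreteness, one generation of this $(\mu+\lambda)$ scheme can be
+sketched in Python as follows; the helpers \texttt{mutate},
+\texttt{crossover} and \texttt{fitness} are placeholders for the
+corresponding routines of the tool, not its actual implementation:
+
+\begin{verbatim}
+import random
+
+def generation(family, mu, lam, c, mutate, crossover, fitness):
+    # family: list of mu genes, each gene being a d-tuple of permutations
+    offspring = []
+    for _ in range(lam):
+        if random.random() < c:              # with probability c: mutation
+            offspring.append(mutate(random.choice(family)))
+        else:                                # otherwise: crossover of two genes
+            a, b = random.sample(family, 2)
+            offspring.append(crossover(a, b))
+    # keep the mu best genes (lowest discrepancy) among parents and offspring
+    return sorted(family + offspring, key=fitness)[:mu]
+\end{verbatim}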
 
 
+
+\subsubsection{Crossover algorithm}
+
+We designed a crossover for permutations. The idea is simple: given two
+permutations $A$ and $B$ of $\{1..n\}$, it constructs a new permutation
+$C$ value by value, in a random order of the positions (we use our
+permutation class for this). For each index $i$, we take either $A_i$ or
+$B_i$. If exactly one of those values is available (i.e.\ it has not
+already been chosen), we choose it. If both are available, we choose one
+at random and remember the other. If both are unavailable, we choose a
+previously remembered value. A flowchart of this algorithm is given in
+figure~\ref{cross_flow}.
+
 \begin{figure}
  \begin{mdframed}
 \includegraphics[scale=0.4]{crossover_flow.pdf}
@@ -326,19 +339,9 @@ each iteration and the size of the family.
 \end{mdframed}
 \end{figure}
 
-\subsubsection{Algorithm of crossover}
-
-We designed a crossover for permutations. The idea is simple: given two
-permutations $A$ and $B$ of $\{1..n\}$, it constructs a new permutations
-$C$ value after value, in a random order (we use our class permutation
-for it). For each indice $i$, we take either $A_i$ or $B_i$. If exactly
-one of those values is available (understand it was not already chosen)
-we choose it. If both are available, we choose randomly and we remember the
-second. If both are unavailable, we choose a remembered value.
-
 The benefits of this method are that it keeps common values of $A$ and $B$,
 the values $C_i$ are often among $\{A_i, B_i\}$ (hence $C$ is close to $A$
-and $B$), and it does not favor either the begining or the ending of
+and $B$), and it does not favor either the beginning or the ending of
 permutations.
 
 \begin{algorithm}[H]

+ 173 - 99
main.tex.bak

@@ -1,7 +1,7 @@
 % ------------------------------------------------------------------------
 \documentclass{llncs}
 \input{prelude}
-\begin{document} 
+\begin{document}
 \title{$\mbox{\EightStarBold}$ Discrepancies for generalized Halton points\\
 Comparison of three heuristics for generating point sets}
 % \titlerunning{}  % abbreviated title (for running head)
@@ -29,16 +29,16 @@ Comparison of three heuristics for generating points set}
 
 
 \begin{abstract}
-  Geometric discrepancies are standard measures to quantify the irregularity of 
-  distributions. They are an important notion in numerical integration. 
-  One of the most important discrepancy notions is the so-called star 
-  discrepancy. Roughly speaking, a point set of low star discrepancy value 
+  Geometric discrepancies are standard measures to quantify the irregularity of
+  distributions. They are an important notion in numerical integration.
+  One of the most important discrepancy notions is the so-called star
+  discrepancy. Roughly speaking, a point set of low star discrepancy value
   allows for a small approximation error in quasi-Monte Carlo integration.
-  In this work we present a tool realizing the implantation of three 
-  basics heuristics for construction low discrepancy points sets 
+  In this work we present a tool implementing three
+  basic heuristics for constructing low-discrepancy point sets
   in the generalized Halton model: fully random search, local search with
-  simmulated annealing and genetic $(5+5)$ search with a ad-hoc 
-  crossover function. 
+  simulated annealing and genetic $(5+5)$ search with an ad hoc
+  crossover function.
 \end{abstract}
 
 
@@ -47,14 +47,14 @@ Comparison of three heuristics for generating points set}
 The testing tool is designed to be modular: it is made of independent blocks that
 are interfaced through a scheduler. More precisely, a master wrapper is written
 in Python that calls a first layer which performs the chosen heuristic. This
-layer is written in C++ for performances. The given discrepancy algorithm 
+layer is written in C++ for performance. The given discrepancy algorithm
 --- written in C --- is called when an evaluation of a state is needed.
-The wrapper dispatch the computations on the multi-core architecture of 
-modern computers\footnote{for us, between 2 and 4 physical cores and 4 or 8 
+The wrapper dispatches the computations on the multi-core architecture of
+modern computers\footnote{for us, between 2 and 4 physical cores and 4 or 8
 virtual cores}. This basic architecture is described in figure~\ref{main_flow}.
-More precisely the class diagram for the unitary test layer is 
+More precisely the class diagram for the unitary test layer is
 presented in figure~\ref{class_flow}.
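+
+As an illustration only, the dispatching performed by the master wrapper can
+be thought of as the following Python sketch; the binary name and its
+command-line interface here are hypothetical, not the actual ones of the tool:
+
+\begin{verbatim}
+import subprocess
+from multiprocessing import Pool
+
+def run_once(seed):
+    # one independent run of the (hypothetical) C++ heuristic binary
+    out = subprocess.run(["./heuristic", "--seed", str(seed)],
+                         capture_output=True, text=True, check=True)
+    return float(out.stdout)        # final discrepancy reported by the run
+
+if __name__ == "__main__":
+    with Pool() as pool:            # one worker per (virtual) core by default
+        results = pool.map(run_once, range(32))
+    print(min(results), sum(results) / len(results), max(results))
+\end{verbatim}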
-Experiments were conducted on two machines: 
+Experiments were conducted on two machines:
 \begin{itemize}
   \item 2.4 GHz Intel Dual Core i5 hyper-threaded to 2.8GHz, 8 GB 1600 MHz DDR3.
   \item 2.8 GHz Intel Quad Core i7 hyper-threaded to 3.1GHz, 8 GB 1600 MHz DDR3.
@@ -72,32 +72,32 @@ Experiments were conducted on two machines:
   \label{class_flow}
 \end{figure}
 
-On these machines, some basic profiling has make clear that 
+On these machines, some basic profiling has made it clear that
 the main bottleneck of the computations lies in the \emph{computation
-of the discrepancy}. The chosen algorithm and implantation of this 
-cost function is the DEM-algorithm~\cite{Dobkin} of 
+of the discrepancy}. The chosen algorithm and implementation of this
+cost function is the DEM-algorithm~\cite{Dobkin} of
 \emph{Magnus Wahlstr\o m}~\cite{Magnus}.\medskip
 
-All the experiments has been conducted on dimension 2,3,4 
+All the experiments have been conducted in dimensions 2, 3 and 4
 --- with a fixed Halton basis 7, 13, 29, 3 ---. Some minor tests have
-been made in order to discuss the dependency of the discrepancy and 
+been made in order to discuss the dependency of the discrepancy and
 efficiency of the heuristics with regards to the values chosen for the
-prime base. The average results remains roughly identical when taking 
+prime base. The average results remain roughly identical when
 changing these primes and taking them in the range [2, 100]. For such
-a reason we decided to pursue the full computations with a fixed 
+a reason we decided to pursue the full computations with a fixed
 basis.
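+
+For reference, a short Python sketch of the generalized Halton construction
+assumed throughout --- one digit permutation per dimension, applied to the
+base-$b$ digits of the point index --- is given below; it only illustrates
+the model and is not the code of the evaluated tool:
+
+\begin{verbatim}
+def halton_coordinate(n, base, perm):
+    # perm is a permutation of {0, ..., base-1}; perm[d] scrambles digit d
+    x, factor = 0.0, 1.0 / base
+    while n > 0:
+        n, digit = divmod(n, base)
+        x += perm[digit] * factor
+        factor /= base
+    return x
+
+def halton_point(n, bases, perms):
+    # one coordinate per dimension, e.g. bases = [7, 13, 29] in dimension 3
+    return [halton_coordinate(n, b, p) for b, p in zip(bases, perms)]
+\end{verbatim}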
 
 
 \subsection{Algorithmic insights}
 
-To perform an experiment we made up a 
-loop above the main algorithm which calls the chosen heuristic multiple 
+To perform an experiment we wrap the main algorithm in a
+loop which calls the chosen heuristic multiple
 times in order to smooth the results and obtain more exploitable data.
 Then an arithmetic mean of the values is computed. In addition
 extremal values are also given in order to construct error band graphs.
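+
+The aggregation performed by this outer loop amounts to the following Python
+sketch, where \texttt{run\_heuristic} stands for one full run of the chosen
+heuristic and is only a placeholder:
+
+\begin{verbatim}
+def experiment(run_heuristic, n_restarts):
+    # n_restarts independent runs, each returning a final discrepancy
+    values = [run_heuristic() for _ in range(n_restarts)]
+    mean = sum(values) / len(values)
+    # the extremal values delimit the error band drawn around the mean curve
+    return min(values), mean, max(values)
+\end{verbatim}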
 
-A flowchart of the conduct of one experiment is described in the 
-flowchart~\ref{insight_flow}. The number of iteration of the heuristic is 
+The conduct of one experiment is described in the
+flowchart~\ref{insight_flow}. The number of iterations of the heuristic is
 I and the number of full restarts is N. The function Heuristic() corresponds to
 a single step of the chosen heuristic. We now present an in-depth view of
 the implemented heuristics.
@@ -110,7 +110,7 @@ the implemented heuristics.
 \end{mdframed}
 \end{figure}
 
-Graph are presented not with the usual box plot to show the 
+Graphs are presented not with the usual box plot to show the
 error bounds, but in a more graphical way with error bands. The graph
 of the mean result is included inside a band of the same color which
 represents the uncertainty with regards to the values obtained.
@@ -119,14 +119,14 @@ represents the incertitude with regards to the values obtained.
 
 \subsection{Fully random search (Test case)}
  The first heuristic implemented is the random search. We generate
- random permutations, compute the corresponging sets of Halton points
+ random permutations, compute the corresponding sets of Halton points
  and select the best set with regard to its discrepancy.
- The process is wrapped up in the 
- flowchart~\ref{random_flow}. In order to generate at each step random 
+ The process is wrapped up in the
+ flowchart~\ref{random_flow}. In order to generate at each step random
  permutations, we transform them directly from the previous ones.
-  More precisely the permutation is a singleton object which have method 
+  More precisely the permutation is a singleton object which has a method
   random, built on the Knuth Fisher Yates shuffle. This algorithm allows
-  us to generate an uniformly chosen  permutation at each step. We recall 
+  us to generate a uniformly chosen permutation at each step. We recall
   this fact and detail the algorithm in the following section.
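+
+Before that, a minimal Python sketch of the random search loop itself is given
+below; \texttt{random\_perms} and \texttt{discrepancy} are placeholders for the
+permutation generator and the DEM discrepancy routine of the tool:
+
+\begin{verbatim}
+def random_search(iterations, random_perms, discrepancy):
+    best_perms, best_value = None, float("inf")
+    for _ in range(iterations):
+        perms = random_perms()        # fresh uniformly random permutations
+        value = discrepancy(perms)    # discrepancy of the induced Halton set
+        if value < best_value:
+            best_perms, best_value = perms, value
+    return best_perms, best_value
+\end{verbatim}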
 \begin{figure}
  \begin{mdframed}
@@ -139,12 +139,12 @@ represents the incertitude with regards to the values obtained.
 
   \subsubsection{The Knuth-Fisher-Yates shuffle}
 
-The Fisher–Yates shuffle is an algorithm for generating a random permutation 
-of a finite sets. The Fisher–Yates shuffle is unbiased, so that every 
-permutation is equally likely. We present here the Durstenfeld variant of 
+The Fisher–Yates shuffle is an algorithm for generating a random permutation
+of a finite set. The Fisher–Yates shuffle is unbiased, so that every
+permutation is equally likely. We present here the Durstenfeld variant of
 the algorithm, presented by Knuth in \emph{The Art of Computer Programming}
 vol. 2~\cite{Knuth}.
-The algorithm's time complexity is here $O(n)$, compared to $O(n^2)$ of 
+The algorithm's time complexity is here $O(n)$, compared to the $O(n^2)$ of
 the naive implementation.
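+
+For reference, the Durstenfeld variant can be written in Python as follows
+(a direct transcription, given only as an illustration):
+
+\begin{verbatim}
+import random
+
+def knuth_fisher_yates(t):
+    # in-place unbiased shuffle of the list t, in O(n) time
+    for i in range(len(t) - 1, 0, -1):
+        j = random.randint(0, i)      # uniform index in {0, ..., i}
+        t[i], t[j] = t[j], t[i]
+    return t
+\end{verbatim}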
 
 \begin{algorithm}[H]
@@ -165,40 +165,40 @@ the naive implementation.
   The resulting permutation of KFY is unbiased.
 \end{lemma}
 \begin{proof}
-  Let consider the set $[1,\ldots n]$ as the vertices of a random graph 
-  constructed as the trace of the execution of the algorithm: 
+  Let us consider the set $\{1,\ldots, n\}$ as the vertices of a random graph
+  constructed as the trace of the execution of the algorithm:
   an edge $(i,j)$ exists in the graph if and only if the swap of $T[i]$ and
   $T[j]$ had been executed. This graph encodes the permutation represented by
-  $T$. To be able to encode any permutation the considered graph must be 
+  $T$. To be able to encode any permutation the considered graph must be
   connected --- in order to allow any pairs of points to be swapped ---.
-  Since by construction every points is reached by an edge, and that there 
+  Since by construction every point is reached by an edge, and there
  exist exactly $n-1$ edges, we can conclude directly that any permutation can
-  be reached by the algorithm. Since the probability of getting a fixed graph 
-  of $n-1$ edges with every edges of degree at least one is $n!^{-1}$, the 
+  be reached by the algorithm. Since the probability of getting a fixed graph
+  of $n-1$ edges in which every vertex has degree at least one is $n!^{-1}$, the
   algorithm is thus unbiased.
 
 \end{proof}
 
 
 \subsubsection{Results and stability}
-We first want to analyze the dependence of the results on the number of 
-iterations of the heuristic, in order to discuss its stability. 
+We first want to analyze the dependence of the results on the number of
+iterations of the heuristic, in order to discuss its stability.
 The results are compiled in the figures~\ref{rand_iter2},~\ref{rand_iter3},
 restricted to a number of points between 80 and 180.
-We emphasize on the fact the lots of datas appears on the graphs, 
+We emphasize that a lot of data appears on these graphs,
 and the error band representation makes them a bit messy. These graphs
 were made for extensive internal experiments and parameter exploration.
-The final wrap up graphs are much more lighter and only presents the best 
+The final wrap-up graphs are much lighter and only present the best
 results obtained.
-As expected from a fully random search, the error bands are very large for 
+As expected from a fully random search, the error bands are very large for
 a low number of iterations ($15\%$ of the value for 400 iterations) and tend
 to shrink with a bigger number of iterations (around $5\%$ for 1500 iterations).
 This shrinkage is a direct consequence of well-known concentration bounds
 (Chernoff and Azuma-Hoeffding).
-The average results are quite stable, they decrease progressively with 
-the growing number of iterations, but seem to get to a limits after 1000 
+The average results are quite stable, they decrease progressively with
+the growing number of iterations, but seem to reach a limit after 1000
 iterations. This value acts as a threshold for the interesting number of iterations.
-As such interesting results can be conducted with \emph{only} 1000 iterations, 
+As such, experiments can be conducted with \emph{only} 1000 iterations,
 without altering too much the quality of the set with regards to its
 discrepancy, for this heuristic.
 
@@ -213,33 +213,32 @@ discrepancy and this heuristic.
 \label{rand_iter3}
 \end{figure}
 
-%# TODO sa n'est pas evolutionnaire
-\subsection{Evolutionary heuristic: Simulated annealing and local search}
-The second heuristic implemented is a randomized local search with 
-simulated annealing. This heuristic is inspired by the physical 
+\subsection{Local search with simulated annealing}
+The second heuristic implemented is a randomized local search with
+simulated annealing. This heuristic is inspired by the physical
 process of annealing in metallurgy.
-Simulated annealing interprets the physical slow cooling as a 
-slow decrease in the probability of accepting worse solutions as it 
-explores the solution space. 
+Simulated annealing interprets the physical slow cooling as a
+slow decrease in the probability of accepting worse solutions as it
+explores the solution space.
 More precisely a state is a $d$-tuple of permutations, one for each dimension,
 and the neighbourhood is the set of $d$-tuples of permutations which can be obtained
-by application of exactly one transposition of one of the permutation of 
+by application of exactly one transposition to one of the permutations of
 the current state.
 The selection phase is dependent on the current temperature:
 after applying a random transposition on one of the current permutations, either
-the discrepancy of the corresponding Halton set is decreased and the 
-evolution is kept, either it does not but is still kept with 
+the discrepancy of the corresponding Halton set is decreased and the
+evolution is kept, or it does not but is still kept with
 a probability $e^{\frac{\delta}{T}}$ where $\delta$ is the difference
 between the old and new discrepancy, and $T$ the current temperature.
 If the discrepancy has decreased, the temperature $T$ is multiplied
-by a factor $\lambda$ (fixed to $0.992$ in all our simultations), hence
+by a factor $\lambda$ (fixed to $0.992$ in all our simulations), hence
 is decreased.
 The whole algorithm is described in the flowchart~\ref{flow_rec}.
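+
+One step of this acceptance rule can be sketched in Python as follows;
+\texttt{neighbour} (one random transposition in one of the permutations) and
+\texttt{discrepancy} are placeholders supplied by the caller, and the sign
+convention for $\delta$ follows the text above:
+
+\begin{verbatim}
+import math, random
+
+def annealing_step(state, value, temperature, neighbour, discrepancy,
+                   cooling=0.992):
+    candidate = neighbour(state)
+    new_value = discrepancy(candidate)
+    delta = value - new_value           # > 0 when the discrepancy decreased
+    if delta > 0:                       # improvement: keep it and cool down
+        return candidate, new_value, temperature * cooling
+    if random.random() < math.exp(delta / temperature):
+        return candidate, new_value, temperature  # worse, kept with prob e^(delta/T)
+    return state, value, temperature
+\end{verbatim}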
 
 \begin{figure}
  \begin{mdframed}
 \includegraphics[scale=0.4]{flow_recuit.pdf}
-\caption{Flowchart of the simulated annealing local search heuristic}
+\caption{Flowchart of the local search with simulated annealing heuristic}
 \label{flow_rec}
 \end{mdframed}
 \end{figure}
@@ -248,14 +247,14 @@ The whole algorithm is described in the flowchart~\ref{flow_rec}.
 First experiments were made to select the best initial temperature.
 Results are compiled in graphs~\ref{temp_2},~\ref{temp3},\ref{temp3_z}.
 Graphs~\ref{temp_2},~\ref{temp3} represent the results obtained respectively
-in dimension 2 and 3 between 10 and 500 points. The curve obtained is 
-characteristic of the average evolution of the discrepancy optimization 
-algorithms for Halton points sets: a very fast decrease for low number of 
-points --- roughly up to 80 points --- and then a very slow one 
+in dimension 2 and 3 between 10 and 500 points. The curve obtained is
+characteristic of the average evolution of the discrepancy optimization
+algorithms for Halton point sets: a very fast decrease for a low number of
+points --- roughly up to 80 points --- and then a very slow one
 after~\cite{Doerr}.
 The most interesting part of these results is concentrated between 80 and 160
-points were the different curves splits. The graph~\ref{temp3_z} is a zoom 
-of~\ref{temp3} in this window. We remark on that graph that the lower the 
+points where the different curves split. The graph~\ref{temp3_z} is a zoom
+of~\ref{temp3} in this window. We remark on that graph that the lower the
 temperature is, the better the results are, with a threshold at $10^{-3}$.
 
 \begin{figure}
@@ -283,9 +282,9 @@ As for the fully random search heuristic we investigated the stability
 of the algorithm with regards to the number of iterations. We present here
 the result in dimension 3 in the graph~\ref{iter_sa}. Once again we
 restricted the window between 80 and 180 points where the curves split.
-An interesting phenomena can be observed: the error rates are somehow 
+An interesting phenomenon can be observed: the error rates are somewhat
 invariant w.r.t.\ the number of iterations and once again the 1000 iterations
-threshold seems to appear --- point 145 is a light split between iteration 
+threshold seems to appear --- point 145 shows a slight split between iteration
 1600 and the others, but except for that point, getting more than 1000
 iterations tends to be a waste of time. The error rate for 80 points is the
 biggest, about $15\%$ of the value, which is similar to the error
@@ -297,7 +296,26 @@ rates for fully random search with 400 iterations.
   \label{iter_sa}
 \end{figure}
 
-\subsection{Genetic (5+5) search}
+\subsection{Genetic $(\mu+\lambda)$ search}
+
+The third heuristic implemented is the $(\mu+\lambda)$ genetic search.
+This heuristic is inspired by the evolution of species: a family
+of $\mu$ genes is known (they are generated randomly at the beginning),
+from which $\lambda$ new genes are derived. A gene is the set of parameters
+we are optimizing, i.e. the permutations.
+Each one is derived either from one gene by applying a mutation
+(here a transposition of one of the permutations), or from two
+genes by applying a crossover: a blending of both genes (the
+algorithm is described in detail further on). The probability of
+making a mutation is $c$, the third parameter of the algorithm,
+among $\mu$ and $\lambda$. After that, only the $\mu$ best
+genes are kept, according to their fitness, and the process
+can start again.
+
+Because varying $\mu$ or $\lambda$ does not fundamentally change
+the algorithm, we have chosen to fix $\mu=\lambda=5$ once and for all,
+which seemed to be a good trade-off between the running time of
+each iteration and the size of the family.
 
 
 \begin{figure}
@@ -308,54 +326,110 @@ rates for fully random search with 400 iterations.
 \end{mdframed}
 \end{figure}
 
-\subsubsection{Dependence on the parameter p}
+\subsubsection{Algorithm of crossover}
+
+We designed a crossover for permutations. The idea is simple: given two
+permutations $A$ and $B$ of $\{1..n\}$, it constructs a new permutation
+$C$ value after value, in a random order (we use our class permutation
+for it). For each index $i$, we take either $A_i$ or $B_i$. If exactly
+one of those values is available (understand it was not already chosen)
+we choose it. If both are available, we choose randomly and we remember the
+second. If both are unavailable, we choose a remembered value.
+
+The benefits of this method are that it keeps common values of $A$ and $B$,
+the values $C_i$ are often among $\{A_i, B_i\}$ (hence $C$ is close to $A$
+and $B$), and it does not favor either the beginning or the ending of
+permutations.
+
+\begin{algorithm}[H]
+  \SetAlgoLined
+  \SetKwFunction{Rand}{Rand}
+  \SetKwFunction{RPO}{a random permutation of }
+  \SetKwFunction{RVI}{a random value in }
+  \KwData{Two permutations A[1..n], B[1..n]}
+  \KwResult{A permutation C[1..n]}
+  $\pi \leftarrow \RPO \{1,\dots,n\}$\;
+  $available \leftarrow \{\}$\;
+  $got \leftarrow \{\}$\;
+  \For{$i\leftarrow 1$ \KwTo $n$}{
+     $j \leftarrow \pi_i$\;
+     $a \leftarrow A_j$\;
+     $b \leftarrow B_j$\;
+     \If{$a \in got \land b \in got$}{
+        $v \leftarrow \RVI available$\;
+       }
+     \ElseIf{$a \in got$}{
+        $v \leftarrow b$\;
+       }
+     \ElseIf{$b \in got$}{
+        $v \leftarrow a$\;
+       }
+     \Else{
+        \If{$\Rand(\{0,1\}) = 1$}{
+           $v \leftarrow a$\;
+           $available \leftarrow available \cup \{b\}$\;
+          }
+        \Else{
+           $v \leftarrow b$\;
+           $available \leftarrow available \cup \{a\}$\;
+          }
+       }
+     $C_j \leftarrow v$\;
+     $got \leftarrow got \cup \{v\}$\;
+     $available \leftarrow available \setminus \{v\}$\;
+    }
+  \caption{Permutations crossover}
+\end{algorithm}
+
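+The same crossover, as a runnable Python sketch (a direct transcription of the
+pseudocode above, with $0$-indexed values, not the C++ code of the tool):
+
+\begin{verbatim}
+import random
+
+def crossover(A, B):
+    # A and B are permutations of {0, ..., n-1}; returns a new permutation C
+    n = len(A)
+    order = list(range(n))
+    random.shuffle(order)             # positions are filled in a random order
+    C = [None] * n
+    got = set()                       # values already placed in C
+    available = set()                 # values remembered for later
+    for j in order:
+        a, b = A[j], B[j]
+        if a in got and b in got:
+            v = random.choice(tuple(available))
+        elif a in got:
+            v = b
+        elif b in got:
+            v = a
+        else:
+            v, other = (a, b) if random.random() < 0.5 else (b, a)
+            available.add(other)      # remember the value we did not take
+        C[j] = v
+        got.add(v)
+        available.discard(v)
+    return C
+\end{verbatim}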
+
+\subsubsection{Dependence on the parameter $c$}
 First experiments were made to select the value for the crossover parameter
-p. Results are compiled in graphs~\ref{res_gen2},~\ref{res_gen2z},\ref{res_gen3}
+$c$. Results are compiled in graphs~\ref{res_gen2},~\ref{res_gen2z},\ref{res_gen3}
 and~\ref{res_gen4}.
 Graph~\ref{res_gen2}, represents the results obtained
 in dimension 2 between 10 and 500 points. The curve obtained is, with no
-surprise again, 
-the characteristic curve of the average evolution of the discrepancy we already 
+surprise again,
+the characteristic curve of the average evolution of the discrepancy we already
 saw with the previous experiments.
 The most interesting part of these results is concentrated --- once again ---
-between 80 and 160 points were the different curves splits. 
-The graph~\ref{res_gen2z} is a zoom  of~\ref{res_gen2} in this window, and 
+between 80 and 160 points where the different curves split.
+The graph~\ref{res_gen2z} is a zoom  of~\ref{res_gen2} in this window, and
 graphs~\ref{res_gen3} and~\ref{res_gen4} are focused directly on it too.
-We remark that in dimension 2, the results are better for $p$ close to $0.5$ 
-whereas for dimension 3 and 4 the best results are obtained for $p$ closer to
+We remark that in dimension 2, the results are better for $c$ close to $0.5$
+whereas for dimension 3 and 4 the best results are obtained for $c$ closer to
 $0.1$.
 
 
 \begin{figure}
 \includegraphics[scale=0.3]{Results/res_gen_2.png}
-\caption{Dependence on parameter p: D=2}
+\caption{Dependence on parameter $c$: D=2}
   \label{res_gen2}
 \end{figure}
 
 \begin{figure}
 \includegraphics[scale=0.3]{Results/res_gen_2_zoom.png}
-\caption{Dependence on parameter p (zoom): D=2}
+\caption{Dependence on parameter $c$ (zoom): D=2}
 \label{res_gen2z}
 \end{figure}
 \begin{figure}
 \includegraphics[scale=0.3]{Results/res_gen_3_zoom.png}
-\caption{Dependence on parameter p: D=3}
+\caption{Dependence on parameter $c$: D=3}
   \label{res_gen3}
 \end{figure}
 
 \begin{figure}
 \includegraphics[scale=0.3]{Results/res_gen_4_zoom.png}
-\caption{Dependence on parameter p: D=4}
+\caption{Dependence on parameter $c$: D=4}
   \label{res_gen4}
 \end{figure}
 
 Once again we investigated the stability
-of the algorithm with regards to the number of iterations.  Once again we
+of the algorithm with regards to the number of iterations. Once again we
 restricted the window between 80 and 180 points where the curves split.
-The results are compiled in grah~\ref{gen_iter}.
+The results are compiled in graph~\ref{gen_iter}.
 An interesting phenomenon can be observed: the error rates are getting really
-big for 1400 iterations at very low points (up to 120), 
-even if the average results are stables after the threshold 1000 iterations, 
+big for 1400 iterations at very low numbers of points (up to 120),
+even if the average results are stable after the 1000 iterations threshold,
 as we observed before.
 
 
@@ -367,23 +441,23 @@ like we get before.
 
 \section{Results}
 Eventually we made extensive experiments to compare the three previously
-presented heuristics. The parameters chosen for the heuristics have been 
+presented heuristics. The parameters chosen for the heuristics have been
 chosen using the experiments conducted in the previous sections.
-Results are compiled in the last 
-figures~\ref{wrap2},~\ref{wrap2z},~\ref{wrap3z},~\ref{wrap4z}. The 
+Results are compiled in the last
+figures~\ref{wrap2},~\ref{wrap2z},~\ref{wrap3z},~\ref{wrap4z}. The
 recognizable curve of decrease
-of the discrepancy is still clearly recognizable in the graph~\ref{wrap2}, 
-made for points ranged between 10 and 600. We then present the result 
-in the --- now classic --- window 80 points - 180 points ---. 
+of the discrepancy is still clearly visible in the graph~\ref{wrap2},
+made for points ranging between 10 and 600. We then present the results
+in the --- now classic --- window of 80 to 180 points.
 For all dimensions, the superiority of non-trivial algorithms --- simulated
-annealing and genetic search --- is clear over fully random search. 
-Both curves for these heuristics are way below the error band of random 
+annealing and genetic search --- is clear over fully random search.
+Both curves for these heuristics are way below the error band of random
 search. As a result, \emph{the worst average results of the non-trivial heuristics are
 better than the best average results obtained when sampling points at random}.
 In dimension 2~\ref{wrap2z}, the best results are given by the simulated annealing,
 whereas in dimension 3 and 4~\ref{wrap3z},~\ref{wrap4z}, best results are
 given by genetic search. It is also noticeable that in that range
-of points the error rates are roughly the same for all heuristics: 
+of points the error rates are roughly the same for all heuristics:
 \emph{for 1000 iterations, the stability of the results is globally the
 same for each heuristic}.
 
@@ -413,10 +487,10 @@ same for each heuristic}.
 
 \section{Conclusion}
 
-\section*{Acknoledgments}
-We would like to thank Magnus Wahlstrom from the Max Planck Institute for Informatics 
-for providing an implementation of the DEM algorithm [DEM96]. 
-We would also like to thank Christoff Durr and Carola Doerr 
+\section*{Acknowledgments}
+We would like to thank Magnus Wahlstrom from the Max Planck Institute for Informatics
+for providing an implementation of the DEM algorithm [DEM96].
+We would also like to thank Christoph Dürr and Carola Doerr
 for several very helpful talks on the topic of this work.
 Both Thomas Espitau and Olivier Marty are supported by the French Ministry for
 Research and Higher Education, through the Ecole Normale Supérieure.