espitau · commit 79e705ae65 · 8 years ago
2 changed files with 81 additions and 20 deletions

+ 3 - 3
main.tex

@@ -333,9 +333,9 @@ annealing and genetic search --- is clear over fully random search.
 Both curves for these heuristics lie well below the error band of random
 search. As a result, \emph{the worst average results of the non-trivial
 heuristics are better than the best average results obtained when sampling
 points at random}.
-In dimension 2~\ref{wrap2z}, the best results are given by the gentic search,
-wheras in dimension 3 and 4~\ref{wrap3z},~\ref{wrap4z}, best results are
-given by simmulated annealing. It is also noticable that in that range
+In dimension 2~\ref{wrap2z}, the best results are given by the genetic search,
+whereas in dimensions 3 and 4~\ref{wrap3z},~\ref{wrap4z}, the best results are
+given by simulated annealing. It is also noticeable that in that range
 of points the error rates are roughly the same for all heuristics:
 \emph{for 1000 iterations, the stability of the results is globally the
 same for each heuristic}.

+ 78 - 17
main.tex.bak

@@ -56,7 +56,8 @@ Experiments were conducted on two machines:
 On these machines, some basic profiling has made it clear that
 the main bottleneck of the computations lies in the \emph{computation
 of the discrepancy}. The chosen algorithm and implementation of this
-cost function is the DEM-algorithm of \emph{Magnus Wahlstr\o m}.\medskip
+cost function is the DEM-algorithm~\cite{Dobkin} of 
+\emph{Magnus Wahlstr\o m}~\cite{Magnus}.\medskip
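
For intuition, a minimal Python sketch of the naive exact computation of the
star discrepancy is given below; it enumerates all candidate boxes anchored at
the origin and is exponential in the dimension, so it is \emph{not} the DEM
algorithm used in our experiments, only an illustration of the cost function
being minimized (the function name is ours).

\begin{verbatim}
from itertools import product

def star_discrepancy_naive(points):
    """Exact L-infinity star discrepancy of a point set in [0,1]^d,
    by enumeration of candidate boxes anchored at the origin.
    Only usable for tiny sets: the grid has O(n^d) corners."""
    n, d = len(points), len(points[0])
    # Candidate upper corners: point coordinates on each axis, plus 1.0.
    grids = [sorted({p[i] for p in points} | {1.0}) for i in range(d)]
    worst = 0.0
    for corner in product(*grids):
        vol = 1.0
        for c in corner:
            vol *= c  # volume of the box [0, corner)
        # Points strictly inside the open box, and inside the closed box.
        open_cnt = sum(all(p[i] < corner[i] for i in range(d)) for p in points)
        closed_cnt = sum(all(p[i] <= corner[i] for i in range(d)) for p in points)
        worst = max(worst, vol - open_cnt / n, closed_cnt / n - vol)
    return worst
\end{verbatim}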
 
 All the experiments have been conducted in dimensions 2, 3 and 4
 --- with a fixed Halton basis 7, 13, 29, 3 ---. Some minor tests have
@@ -90,7 +91,7 @@ the implemented heuristics.
 \end{mdframed}
 \end{figure}
 
-Graph are presentd not with the usual "mustache boxes" to show the 
+Graphs are presented not with the usual box-and-whisker plots to show the
 error bounds, but in a more graphical way with error bands. The graph
 of the mean result is drawn inside a band of the same color, which
 represents the uncertainty of the obtained values.
@@ -121,7 +122,8 @@ represents the incertitude with regards to the values obtained.
 The Fisher–Yates shuffle is an algorithm for generating a random permutation
 of a finite set. It is unbiased: every
 permutation is equally likely. We present here the Durstenfeld variant of
-the algorithm, presented by Knuth in \emph{The Art of Computer programming}.
+the algorithm, presented by Knuth in \emph{The Art of Computer Programming}
+vol. 2~\cite{Knuth}.
 The algorithm's time complexity is here $O(n)$, compared to $O(n^2)$ for
 the naive implementation.
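
Since the report only references the algorithm, a minimal Python sketch of the
Durstenfeld variant follows (the function name is ours, not the project's code):

\begin{verbatim}
import random

def durstenfeld_shuffle(a):
    """In-place Fisher-Yates shuffle, Durstenfeld variant, in O(n).
    Each of the n! permutations is produced with equal probability."""
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)  # uniform on {0, ..., i}, inclusive
        a[i], a[j] = a[j], a[i]   # swap the chosen element into place
    return a
\end{verbatim}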
 
@@ -163,7 +165,7 @@ We first want to analyze the dependence of the results on the number of
 iterations of the heuristic, in order to discuss its stability. 
 The results are compiled in the figures~\ref{rand_iter2},~\ref{rand_iter3},
 restricted to a number of points between 80 and 180.
-We emphazies on the fact the lots of datas appears on the graphs, 
+We emphasize that a lot of data appears on these graphs,
 and the error-band representation makes them a bit messy. These graphs
 were made for extensive internal experiments and parameter searches.
 The final wrap-up graphs are much lighter and only present the best
@@ -192,8 +194,8 @@ discrepancy and this heuristic.
 \end{figure}
 
 \subsection{Evolutionary heuristic: Simulated annealing and local search}
-The second heuristic implemented is a randomiezd local search with 
-simmulated annealing. This heuristic is inspired by the physical 
+The second heuristic implemented is a randomized local search with 
+simulated annealing. This heuristic is inspired by the physical 
 process of annealing in metallurgy.
 Simulated annealing interprets the physical slow cooling as a 
 slow decrease in the probability of accepting worse solutions as it 
@@ -201,9 +203,9 @@ explores the solution space.
 More precisely, the neighbours here are the permutations which can be obtained
 by applying exactly one transposition to the current permutation.
 The selection phase depends on the current temperature:
-after applaying a random transposition on the current permutation, either
-the discrepency of the corresponding Halton set is decreased and the 
-evolution is keeped, either it does not but is still keeped with 
+after applying a random transposition to the current permutation, either
+the discrepancy of the corresponding Halton set decreases and the
+move is kept, or it does not and the move is still kept with
 a probability $e^{\frac{\delta}{T}}$, where $\delta$ is the difference
 between the old and new discrepancy and $T$ the current temperature
 (for a worsening move $\delta < 0$, so the acceptance probability shrinks
 as the temperature decreases).
 The whole algorithm is described in the flowchart~\ref{flow_rec}.
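
A compact Python sketch of this annealing loop is given below; the geometric
cooling schedule and its parameters are illustrative assumptions (our tuned
values come from the experiments below), and \texttt{discrepancy} stands for
the cost function discussed above:

\begin{verbatim}
import math
import random

def simulated_annealing(perm, discrepancy, iterations=1000, t0=1.0, alpha=0.99):
    """Randomized local search over permutations with transposition
    neighbours and a simulated-annealing acceptance rule."""
    current, t = list(perm), t0
    cost = discrepancy(current)
    best, best_cost = list(current), cost
    for _ in range(iterations):
        i, j = random.sample(range(len(current)), 2)
        current[i], current[j] = current[j], current[i]  # random transposition
        new_cost = discrepancy(current)
        delta = cost - new_cost  # positive iff the move improves
        if delta > 0 or random.random() < math.exp(delta / t):
            cost = new_cost  # keep the move (always if it improves)
            if cost < best_cost:
                best, best_cost = list(current), cost
        else:
            current[i], current[j] = current[j], current[i]  # undo the move
        t *= alpha  # slow geometric cooling (assumed schedule)
    return best, best_cost
\end{verbatim}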
@@ -217,14 +219,15 @@ The all algorithm is described in the flowchart~\ref{flow_rec}.
 \end{figure}
 
 \subsubsection{Dependence on the temperature}
-First experiements were made to select the best initial temperature.
+First experiments were made to select the best initial temperature.
 Results are compiled in graphs~\ref{temp_2},~\ref{temp3},~\ref{temp3_z}.
 Graphs~\ref{temp_2},~\ref{temp3} represent the results obtained respectively
 in dimensions 2 and 3 between 10 and 500 points. The curve obtained is
-characteristic of the average evolution of the discrepancy optimisation 
+characteristic of the average evolution of the discrepancy optimization 
 algorithms for Halton point sets: a very fast decrease for a low number of
-points --- roughly up to 80 points --- and then a very slow one after.
-The most intersting part of these results are concentred between 80 and 160
+points --- roughly up to 80 points --- and then a very slow one 
+after~\cite{Doerr}.
+The most interesting part of these results is concentrated between 80 and 160
 points, where the different curves split. The graph~\ref{temp3_z} is a zoom
 of~\ref{temp3} in this window. We remark on that graph that the lower the
 temperature is, the better the results are.
@@ -250,21 +253,21 @@ temperature is, the best the results are.
 
 \subsubsection{Stability with regards to the number of iterations}
 
-As for the fully random search heursitic we invatigated the stability
+As for the fully random search heuristic, we investigated the stability
 of the algorithm with respect to the number of iterations. We present
 the results in dimension 3 in the graph~\ref{iter_sa}. Once again we
-resticted the window between 80 and 180 points were curves are splited.
+restricted the window between 80 and 180 points, where the curves split.
 An interesting phenomenon can be observed: the error rates are somehow
 invariant w.r.t.\ the number of iterations, and once again the 1000-iteration
 threshold seems to appear --- point 145 shows a slight split between iteration
-1600 and the others, but excpeted for that point, getting more than 1000
+1600 and the others, but except for that point, getting more than 1000
 iterations tends to be a waste of time. The error rate is largest for 80
 points, at about $15\%$ of the value, which is similar to the error
 rates for fully random search with 400 iterations.
 
 \begin{figure}
 \includegraphics[scale=0.3]{Results/sa_iter.png}
-\caption{Dependence on iterations number for simmulated annealing : D=3}
+\caption{Dependence on the number of iterations for simulated annealing: D=3}
   \label{iter_sa}
 \end{figure}
 
@@ -302,9 +305,67 @@ rates for fully random search with 400 iterations.
 \caption{Dependence on the number of iterations: D=3}
 \end{figure}
 
+
 \section{Results}
+Finally, we made extensive experiments to compare the three previously
+presented heuristics. Their parameters were chosen using the experiments
+conducted in the previous sections. Results are compiled in the last
+figures~\ref{wrap2},~\ref{wrap2z},~\ref{wrap3z},~\ref{wrap4z}. The
+characteristic decreasing curve
+of the discrepancy is still clearly recognizable in the graph~\ref{wrap2},
+made for point counts ranging between 10 and 600. We then present the results
+in the --- now classic --- window of 80 to 180 points.
+For all dimensions, the superiority of the non-trivial algorithms --- simulated
+annealing and genetic search --- over fully random search is clear.
+Both curves for these heuristics lie well below the error band of random
+search. As a result, \emph{the worst average results of the non-trivial
+heuristics are better than the best average results obtained when sampling
+points at random}.
+In dimension 2~\ref{wrap2z}, the best results are given by the genetic search,
+whereas in dimensions 3 and 4~\ref{wrap3z},~\ref{wrap4z}, the best results are
+given by simulated annealing. It is also noticeable that in that range
+of points the error rates are roughly the same for all heuristics:
+\emph{for 1000 iterations, the stability of the results is globally the
+same for each heuristic}.
 
+\begin{figure}
+\includegraphics[scale=0.3]{Results/wrap_2.png}
+\caption{Comparison of all heuristics: D=2}
+\label{wrap2}
+\end{figure}
+
+\begin{figure}
+\includegraphics[scale=0.3]{Results/wrap_2_zoom.png}
+\caption{Comparison of all heuristics (zoom): D=2}
+  \label{wrap2z}
+\end{figure}
+
+\begin{figure}
+\includegraphics[scale=0.3]{Results/wrap_3.png}
+\caption{Comparison of all heuristics: D=3}
+  \label{wrap3z}
+\end{figure}
+
+\begin{figure}
+\includegraphics[scale=0.3]{Results/wrap_4.png}
+\caption{Comparison of all heuristics: D=4}
+  \label{wrap4z}
+\end{figure}
 
 \section{Conclusion}
 
+  \bibliographystyle{alpha}
+  \bibliography{bi}
 \end{document}