Olivier Marty 8 years ago
parent
commit
b497bd1abc
1 changed file with 20 additions and 20 deletions

+ 20 - 20
main.tex

@@ -223,16 +223,16 @@ Simulated annealing interprets the physical slow cooling as a
 slow decrease in the probability of accepting worse solutions as it
 explores the solution space.
 More precisely, a state is a $d$-tuple of permutations, one for each dimension,
-and the neighbourhood is the set of $d$-tuple of permutations which can be obtained
-by application of exactly one transposition of one of the permutation of
+and the neighborhood is the set of $d$-tuples of permutations which can be obtained
+by application of exactly one transposition of one of the permutations of
 the current state.
 The selection phase is dependent on the current temperature:
-after applying a random transposition on one of the current permutations, either
+after randomly selecting a state in the neighborhood, either
 the discrepancy of the corresponding Halton set is decreased and the
 evolution is kept, or it is not but the move is still kept with
 a probability $e^{\frac{\delta}{T}}$ where $\delta$ is the difference
 between the old and new discrepancy, and $T$ the current temperature.
-If the de discrepancy has decreased, the temperature $T$ is multiplied
+If the discrepancy has decreased, the temperature $T$ is multiplied
 by a factor $\lambda$ (fixed to $0.992$ in all our simulations), and hence
 decreases.
 The whole algorithm is described in the flowchart~\ref{flow_rec}.
@@ -247,8 +247,8 @@ The whole algorithm is described in the flowchart~\ref{flow_rec}.
 
 
 \subsubsection{Dependence on the temperature}
 First experiments were made to select the best initial temperature.
-Results are compiled in graphs~\ref{temp_2},~\ref{temp3},\ref{temp3_z}.
-Graphs~\ref{temp_2},~\ref{temp3} represent the results obtained respectively
+Results are compiled in graphs~\ref{temp_2},~\ref{temp3}, and~\ref{temp3_z}.
+Graphs~\ref{temp_2} and~\ref{temp3} represent the results obtained respectively
 in dimensions 2 and 3, between 10 and 500 points. The curve obtained is
 characteristic of the average evolution of the discrepancy optimization
 algorithms for Halton point sets: a very fast decrease for low numbers of
@@ -281,7 +281,7 @@ temperature is, the better the results are, with a threshold at $10^{-3}$.
 \subsubsection{Stability with regard to the number of iterations}
 
 
 As for the fully random search heuristic, we investigated the stability
-of the algorithm with regards to the number of iterations. We present here
+of the algorithm with regard to the number of iterations. We present
 the result in dimension 3 in the graph~\ref{iter_sa}. Once again we
 restricted the window between 80 and 180 points where curves are split.
 An interesting phenomenon can be observed: the error rates are somehow
@@ -307,11 +307,11 @@ from which $\lambda$ new genes are derived. A gene is the set of parameters
 we are optimizing, i.e. the permutations.
 Each one is derived either from one gene by applying a mutation
 (here a transposition of one of the permutations), or from two
-genes applying a crossover : a blending of both genes (the
+genes by applying a crossover: a blending of both genes (the
 algorithm is described in detail further on). The probability of
-making a mutation is $c$, the third parameter of the algorithm,
+making a crossover rather than a mutation is $c$, the third parameter of the algorithm,
 along with $\mu$ and $\lambda$. After that, only the $\mu$ best
-genes are kept, according to their fitness, and the process
+genes are kept, according to their fitness, and the evolutionary process
 can start again.
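
A compact sketch of this evolutionary loop in Python may make the roles of $\mu$, $\lambda$, and $c$ concrete. The operators are passed in as assumed callables, and keeping the $\mu$ best among parents and children together (a $(\mu+\lambda)$ selection) is only our reading of the selection step.

    import random

    def evolve(population, fitness, mutate, crossover,
               lam=40, c=0.1, generations=100):
        # population: list of mu genes (each gene: a d-tuple of permutations).
        # fitness: assumed callable, lower is better (here the discrepancy).
        # c: probability of deriving a child by crossover rather than mutation.
        mu = len(population)
        for _ in range(generations):
            children = []
            for _ in range(lam):
                if random.random() < c:
                    a, b = random.sample(population, 2)
                    children.append(crossover(a, b))
                else:
                    children.append(mutate(random.choice(population)))
            # Selection: keep only the mu fittest genes.
            population = sorted(population + children, key=fitness)[:mu]
        return population
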
 
 
 Because making variations over $\mu$ or $\lambda$ does not change fundamentally
@@ -323,10 +323,10 @@ each iteration and the size of the family.
 
 
 \subsubsection{Crossover algorithm}
 
 
-We designed a crossover for permutations. The idea is simple: given two
-permutations $A$ and $B$ of $\{1..n\}$, it constructs a new permutations
+We designed an ad-hoc crossover for permutations. The idea is simple: given two
+permutations $A$ and $B$ of $\{1..n\}$, it constructs a new permutation
 $C$ value after value, in a random order (we use our permutation class
-for it). For each   $i$, we take either $A_i$ or $B_i$. If exactly
+for this). For each index $i$, we take either $A_i$ or $B_i$. If exactly
 one of those values is available (meaning it was not already chosen),
 we choose it. If both are available, we choose randomly and we remember
 the
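
The following Python sketch is only a best guess at the full construction (the hunk ends mid-sentence): the random visiting order and the random pick on ties match the text, while the final repair pass for indices where neither value is free is our assumption.

    import random

    def crossover(A, B):
        # Blend permutations A and B of range(n) into a new permutation C.
        n = len(A)
        C = [None] * n
        used = set()
        for i in random.sample(range(n), n):  # visit indices in random order
            candidates = [v for v in (A[i], B[i]) if v not in used]
            if candidates:
                # Take A[i] or B[i]; pick at random when both are available.
                C[i] = random.choice(candidates)
                used.add(C[i])
        # Repair pass: fill the remaining holes with the unused values.
        free = [v for v in range(n) if v not in used]
        random.shuffle(free)
        for i in range(n):
            if C[i] is None:
                C[i] = free.pop()
        return C
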
@@ -398,7 +398,7 @@ The graph~\ref{res_gen2z} is a zoom of~\ref{res_gen2} in this window, and
 graphs~\ref{res_gen3} and~\ref{res_gen4} are focused directly on it too.
 We remark that in dimension 2, the results are better for $c$ close to $0.5$
 whereas for dimensions 3 and 4 the best results are obtained for $c$ closer to
-$0.1$.
+$0.1$, that is, a low probability of making a crossover.
 
 
 
 
 \begin{figure}
@@ -444,13 +444,13 @@ as we got before.
 \section{Results and conclusions}
 Eventually we ran extensive experiments to compare the three previously
 presented heuristics. The parameters of the heuristics have been
-chosen using the experiments conducted in the previous sections
+guessed using the experiments conducted in the previous sections.
 Results are compiled in the last
-figures~\ref{wrap2},~\ref{wrap2z},~\ref{wrap3z},~\ref{wrap4z}. The
+figures~\ref{wrap2},~\ref{wrap2z},~\ref{wrap3z}, and~\ref{wrap4z}. The
 characteristic curve of decrease
 of the discrepancy is still clearly recognizable in the graph~\ref{wrap2},
 made for points ranging between 10 and 600. We then present the result
-in the --- now classic --- window 80 points - 180 points ---.
+in the --- now classic --- window of 80 to 180 points.
 For all dimensions, the superiority of non-trivial algorithms --- simulated
 annealing and genetic search --- over fully random search is clear.
 Both curves for these heuristics are well below the error band of random
@@ -491,10 +491,10 @@ same for each heuristic}.
 \section*{Acknowledgments}
 We would like to thank Magnus Wahlstrom from the Max Planck Institute for Informatics
 for providing an implementation of the DEM algorithm.
-We would also like to thank Christoff Durr and Carola Doerr
+We would also like to thank Christoph D\"urr and Carola Doerr
 for several very helpful discussions on the topic of this work.
-Both Thomas Espitau and Olivier Marty  supported by the French Ministry for
-Research and Higher Education, trough the Ecole Normale Supérieure.
+Both Thomas Espitau and Olivier Marty are supported by the French Ministry for
+Research and Higher Education, through the \'Ecole Normale Supérieure.
   \bibliographystyle{alpha}
   \bibliography{bi}
 \end{document}