
typos and reformulations

Olivier Marty, 8 years ago, commit 0845ea16e0
1 changed file with 22 additions and 15 deletions: main.tex

@@ -79,7 +79,7 @@ extremal values are also given in order to construct error bands graphs.
 
 The conduct of one experiment is described in the 
 flowchart~\ref{insight_flow}. The number of iterations of the heuristic is 
-I and the number of full restart is N. Th function Heuristic() correspond to
+I and the number of full restarts is N. The function Heuristic() corresponds to
 a single step of the chosen heuristic. We now present an in-depth view of
 the implemented heuristics.
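The outer experiment loop just described can be sketched in Python (a minimal sketch; `heuristic_step`, `initial_state`, and `evaluate` are hypothetical stand-ins for the Heuristic() step, the restart initialization, and the discrepancy evaluation):

```python
def run_experiment(I, N, initial_state, heuristic_step, evaluate):
    """N full restarts of I heuristic iterations each; keep the best
    state seen over all restarts (sketch of the flowchart's loop)."""
    best_state, best_value = None, float("inf")
    for _ in range(N):            # full restarts
        state = initial_state()
        for _ in range(I):        # I iterations of the heuristic
            state = heuristic_step(state)
        value = evaluate(state)
        if value < best_value:    # keep the best restart
            best_state, best_value = state, value
    return best_state, best_value
```
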
 
@@ -91,7 +91,7 @@ the implemented heuristics.
 \end{mdframed}
 \end{figure}
 
-Graph are presented not with the usual "mustache boxes" to show the 
+Graphs are presented not with the usual box plots to show the 
 error bounds, but in a more graphical way with error bands. The graph
 of the mean result is included inside a band of the same color which
 represents the uncertainty with regards to the values obtained.
@@ -99,11 +99,12 @@ represents the incertitude with regards to the values obtained.
 \section{Heuristics developed}
 
 \subsection{Fully random search (Test case)}
- The first heuristic implemented is the random search. We generates
- random sets of Halton points and select the best set with regard to its
- discrepancy iteratively. The process is wrapped up in the 
- flowchart~\ref{random_flow}. In order to generate at each step a random 
- permutation, we transform it directly from the previous one.
+ The first heuristic implemented is the random search. We generate
+ random permutations, compute the corresponding sets of Halton points
+ and select the best set with regard to its discrepancy.
+ The process is wrapped up in the 
+ flowchart~\ref{random_flow}. In order to generate random 
+ permutations at each step, we transform them directly from the previous ones.
  More precisely, the permutation is a singleton object which has a method 
  random, built on the Knuth-Fisher-Yates shuffle. This algorithm allows
  us to generate a uniformly chosen permutation at each step. We recall 
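The shuffle mentioned above can be sketched in Python (a minimal sketch; the function name is illustrative):

```python
import random

def fisher_yates(n):
    """Return a uniformly random permutation of range(n) using the
    Knuth / Fisher-Yates shuffle: each of the n! permutations is
    produced with equal probability."""
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        j = random.randint(0, i)  # uniform index in [0, i]
        perm[i], perm[j] = perm[j], perm[i]
    return perm
```

Reshuffling the same array in place at each step matches the singleton-object design described above.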
@@ -176,7 +177,7 @@ to shrink with a bigger number of iterations (around $5\%$ for 1600 iterations).
 This shrinkage is a direct consequence of well-known concentration bounds
 (Chernoff and Azuma-Hoeffding).
 The average results are quite stable, they decrease progressively with 
-the growing number of iterations, but seems to get to a limits after 1000 
+the growing number of iterations, but seem to reach a limit after 1000 
 iterations. This value acts as a threshold for the interesting number of iterations.
 As such, experiments can be conducted with \emph{only} 1000 iterations, 
 without degrading too much the quality of the set with regards to its
@@ -193,6 +194,7 @@ discrepancy and this heuristic.
 \label{rand_iter3}
 \end{figure}
 
+% TODO: this is not evolutionary
 \subsection{Evolutionary heuristic: Simulated annealing and local search}
 The second heuristic implemented is a randomized local search with 
 simulated annealing. This heuristic is inspired by the physical 
@@ -200,15 +202,20 @@ process of annealing in metallurgy.
 Simulated annealing interprets the physical slow cooling as a 
 slow decrease in the probability of accepting worse solutions as it 
 explores the solution space. 
-More precisely the neighbours are here the permutations which can be obtained
-by application of exactly one transposition of the current permutation.
+More precisely, a state is a $d$-tuple of permutations, one for each dimension,
+and the neighbourhood is the set of $d$-tuples of permutations which can be obtained
+by applying exactly one transposition to one of the permutations of 
+the current state.
 The selection phase is dependent on the current temperature:
-after applying a random transposition on the current permutation, either
+after applying a random transposition to one of the current permutations, either
 the discrepancy of the corresponding Halton set decreases and the 
 move is kept, or it does not and is still kept with 
 a probability $e^{\frac{\delta}{T}}$ where $\delta$ is the difference
 between the old and new discrepancy, and $T$ the current temperature.
-The all algorithm is described in the flowchart~\ref{flow_rec}.
+If the discrepancy has decreased, the temperature $T$ is multiplied
+by a factor $\lambda$ (fixed to $0.992$ in all our simulations), and hence
+decreases.
+The whole algorithm is described in the flowchart~\ref{flow_rec}.
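One annealing step as described above can be sketched as follows (a sketch, not the authors' implementation; `discrepancy` is a hypothetical caller-supplied function mapping a state to the discrepancy of its Halton set):

```python
import math
import random

def annealing_step(state, temperature, discrepancy, lam=0.992):
    """One simulated-annealing step on a d-tuple of permutations.

    state       : list of d permutations, one per dimension
    discrepancy : callback returning the discrepancy of the Halton set
                  associated with a state (assumed, not implemented here)
    lam         : cooling factor applied after an improving move
    """
    old_disc = discrepancy(state)
    # Neighbour: one random transposition in one randomly chosen permutation.
    k = random.randrange(len(state))
    perm = list(state[k])
    i, j = random.sample(range(len(perm)), 2)
    perm[i], perm[j] = perm[j], perm[i]
    candidate = state[:k] + [perm] + state[k + 1:]
    new_disc = discrepancy(candidate)
    delta = old_disc - new_disc  # positive iff the discrepancy decreased
    if delta > 0:
        return candidate, temperature * lam   # accept and cool down
    if random.random() < math.exp(delta / temperature):
        return candidate, temperature         # accept a worse state anyway
    return state, temperature                 # reject the move
```

Since $\delta \le 0$ in the second branch, the acceptance probability $e^{\delta/T}$ is at most 1 and shrinks as $T$ decreases.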
 
 \begin{figure}
  \begin{mdframed}
@@ -221,7 +228,7 @@ The all algorithm is described in the flowchart~\ref{flow_rec}.
 \subsubsection{Dependence on the temperature}
 First experiments were made to select the best initial temperature.
 Results are compiled in graphs~\ref{temp_2},~\ref{temp3},~\ref{temp3_z}.
-Graphs~\ref{temp_2},~\ref{temp3} represents the results obtained respectively
+Graphs~\ref{temp_2},~\ref{temp3} represent the results obtained respectively
 in dimensions 2 and 3 between 10 and 500 points. The curve obtained is 
 characteristic of the average evolution of the discrepancy optimization 
 algorithms for Halton point sets: a very fast decrease for low numbers of 
@@ -258,7 +265,7 @@ of the algorithm with regards to the number of iterations. We present here
 the result in dimension 3 in the graph~\ref{iter_sa}. Once again we
 restricted the window between 80 and 180 points where curves are split.
 An interesting phenomenon can be observed: the error rates are somehow 
-invariant w.r.t.\ the number of iteration and once again the 1000 iterations
+invariant w.r.t.\ the number of iterations and once again the 1000 iterations
 threshold seems to appear --- point 145 shows a slight split between the 
 1600-iteration curve and the others, but except for that point, running more than 1000
 iterations tends to be a waste of time. The error rate is for 80 points the
@@ -305,7 +312,7 @@ rates for fully random search with 400 iterations.
 \caption{Dependence on iterations number: D=3}
 \end{figure}
 
-As prev we investigated the stability
+As previously, we investigated the stability
 of the algorithm with regards to the number of iterations. We present here
 the result in dimension 3 in the graph~\ref{iter_sa}. Once again we
 restricted the window between 80 and 180 points where curves are split.