
% ------------------------------------------------------------------------
\documentclass{llncs}
\input{prelude}
\begin{document}
\title{Star Discrepancies for Generalized Halton Points}
% \titlerunning{} % abbreviated title (for running head)
% also used for the TOC unless
% \toctitle is used
\author{Thomas Espitau\inst{1} \and Olivier Marty\inst{1}}
%
% \authorrunning{} % abbreviated author list (for running head)
%
%%%% list of authors for the TOC (use if author list has to be modified)
% \tocauthor{}
%
\institute{
$\mbox{}^1$ ENS Cachan \qquad
}
\maketitle
\makeatletter
\makeatother
\begin{abstract}
\end{abstract}
\section{Introduction}
\section{General architecture of the tool-set}
Big pipeline + pool + blabla\medskip
A nice property would be a modular design: the heuristics are ``blocks'',
and the tool calls a block whenever it wants to run a test, possibly on
several cores in parallel.
(This is simplistic, but it will make a nice diagram!)
\section{Heuristics developed}
\subsection{Fully random search (test case)}
The first heuristic implemented is the random search. We iteratively generate
random sets of Halton points and keep the best set with regard to its
discrepancy. The process is summarized in the flowchart of
Figure~\ref{rand_flow}. In order to generate a random
permutation at each step, we derive it directly from the previous one.
More precisely, the permutation is a singleton object with a method
\texttt{random}, built on the Knuth-Fisher-Yates shuffle. This algorithm
generates a uniformly distributed permutation at each step. We recall
this fact and detail the algorithm in the following section.
\begin{figure}
\begin{mdframed}
\includegraphics[scale=0.4]{flow_rand.pdf}
\caption{Flowchart of the random search}
\label{rand_flow}
\end{mdframed}
\end{figure}
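A minimal sketch of this loop, in Python, is given below. The helper
functions (point-set generation, discrepancy evaluation, in-place shuffle)
are passed as parameters and are hypothetical names, not the tool's actual
API; the sketch only illustrates the ``generate, evaluate, keep the best''
strategy.
\begin{verbatim}
def random_search(generate, evaluate, shuffle_in_place, perm, iterations):
    """Keep the permutation whose Halton point set has the lowest
    discrepancy among `iterations` uniformly shuffled candidates."""
    best_perm, best_disc = list(perm), float("inf")
    for _ in range(iterations):
        shuffle_in_place(perm)           # fresh uniform permutation (KFY)
        disc = evaluate(generate(perm))  # discrepancy of the candidate set
        if disc < best_disc:             # remember the best set seen so far
            best_perm, best_disc = list(perm), disc
    return best_perm, best_disc
\end{verbatim}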
\subsubsection{The Knuth-Fisher-Yates shuffle}
The Fisher--Yates shuffle is an algorithm for generating a random permutation
of a finite set. The shuffle is unbiased, so that every
permutation is equally likely. We present here the Durstenfeld variant of
the algorithm, described by Knuth in \emph{The Art of Computer Programming}.
Its time complexity is $O(n)$, compared to $O(n^2)$ for
the naive implementation.
\begin{algorithm}[H]
\SetAlgoLined
\SetKwFunction{Rand}{Rand}
\SetKwFunction{Swap}{Swap}
\KwData{A table $T[1..n]$}
\KwResult{The same table $T$, shuffled}
\For{$i\leftarrow 1$ \KwTo $n-1$}{
$j \leftarrow$ \Rand{$[0,n-i]$}\;
\Swap{$T[i], T[i+j]$}\;
}
\caption{KFY algorithm}
\end{algorithm}
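For concreteness, the shuffle translates to a few lines of Python. This is a
sketch of the algorithm above, not the tool's actual code:
\begin{verbatim}
import random

def kfy_shuffle(t, rng=random):
    """Shuffle the list t in place; every permutation is equally likely."""
    n = len(t)
    for i in range(n - 1):
        j = rng.randrange(i, n)    # uniform index in {i, ..., n-1}
        t[i], t[j] = t[j], t[i]    # swap T[i] with a later (or the same) slot
    return t
\end{verbatim}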
\begin{lemma}
The permutation produced by KFY is unbiased.
\end{lemma}
\begin{proof}
Consider the set $\{1,\ldots, n\}$ as the vertices of a random graph
constructed as the trace of the execution of the algorithm:
an edge $(i,j)$ exists in the graph if and only if the swap of $T[i]$ and
$T[j]$ has been executed. This graph encodes the permutation represented by
$T$. To be able to encode any permutation, the considered graph must be
connected, so that any pair of points can be swapped.
Since by construction every point is incident to an edge, and since there are
exactly $n-1$ edges, we conclude directly that any permutation can
be reached by the algorithm. Since the probability of obtaining a fixed graph
on $n-1$ edges in which every vertex has degree at least one is $n!^{-1}$, the
algorithm is thus unbiased.
\end{proof}
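The lemma can also be checked empirically for small $n$: shuffling $[1,2,3]$
many times with the sketch above yields each of the $3!=6$ permutations with
frequency close to $1/6$.
\begin{verbatim}
from collections import Counter

counts = Counter()
for _ in range(60000):
    t = [1, 2, 3]
    kfy_shuffle(t)       # kfy_shuffle as defined in the previous sketch
    counts[tuple(t)] += 1
print(counts)            # each of the 6 permutations appears ~10000 times
\end{verbatim}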
\subsubsection{Results and stability}
We first want to analyse the dependence of the results on the number of
iterations of the algorithm. To perform such an experiment we built a
wrapper around the main algorithm which calls the random search with
different numbers of iterations. To smooth the results and obtain
more exploitable data, we also take the
arithmetic mean of fifteen searches for each experiment. The conduct of
this experiment is described in the flowchart of Figure~\ref{rand_flow_iter}.
\begin{figure}
\begin{mdframed}
\includegraphics[scale=0.4]{flow_rand_2.pdf}
\caption{Flowchart of the random search experiment for the dependence on iterations}
\label{rand_flow_iter}
\end{mdframed}
\end{figure}
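A sketch of this wrapper, again in Python and assuming the hypothetical
\texttt{random\_search} function from the earlier sketch (wrapped into a
one-argument \texttt{run\_search} closure that returns the best discrepancy),
is the following:
\begin{verbatim}
def iteration_experiment(run_search, budgets, repeats=15):
    """For each iteration budget, run the search `repeats` times and
    return the arithmetic mean of the best discrepancies found."""
    results = {}
    for budget in budgets:
        discs = [run_search(budget) for _ in range(repeats)]
        results[budget] = sum(discs) / len(discs)   # arithmetic mean
    return results
\end{verbatim}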
The results are compiled in Figure~\ref{rand_res_2}.
\begin{figure}
\includegraphics[scale=0.6]{rand_res.png}
\caption{Dependence on the number of iterations: $D=2$}
\label{rand_res_2}
\end{figure}
The same experiment has been conducted in dimension 4, with Halton bases
7, 13, 29, and 3 (Figure~\ref{rand_res_4}).
\begin{figure}
\includegraphics[scale=0.6]{rand_search_4.png}
\caption{Dependence on the number of iterations: $D=4$}
\label{rand_res_4}
\end{figure}
The same experiment has been conducted in dimension 6, with Halton bases
7, 13, 29, 3, 11, and 31 (Figure~\ref{rand_res_6}).
\begin{figure}
\includegraphics[scale=0.6]{rand_search_4.png}
\caption{Dependence on the number of iterations: $D=6$}
\label{rand_res_6}
\end{figure}
Based on the same wrapper, we also study the dependence on the number
of points, in dimensions 2, 4, and 6 (Figure~\ref{rand_res_points}).
\begin{figure}
\includegraphics[scale=0.6]{low_discr_rand.png}
\caption{Dependence on the number of points}
\label{rand_res_points}
\end{figure}
\subsection{Local Search}
Blabla+Pipeline
\subsection{Evolutionary heuristics}
Blabla+Pipeline
\subsection{Genetic heuristics}
Blabla+Pipeline
\section{Results}
\section{Conclusion}
\end{document}