With each passing day, the fragility of empirical research in the social sciences becomes more evident. There is a lack of rigor in researchers' use of statistical tools (econometrics in particular), stemming from unfamiliarity with the properties of economic and financial data. Taleb gets straight to the point:
"Econometrics is dominated by standard deviations, and more generally by measures in the L2 norm, based on squares of numbers (SD is the square root of the average of the sum of the squared deviations), all of which are grounded in a class that revolves around the Gaussian family: the Gaussian and related distributions that converge to it under a reasonable amount of summation, such as the binomial, Poisson, chi-square, and exponential distributions. The problem is that the Gaussian distribution is of limited applicability outside of textbook examples — it is the type of randomness that prevails in game setups such as coin tosses, or possibly in quantum mechanics. Using it leads to the underestimation of fat tails and the role of extreme events, and to predictions that underestimate their own errors."
Here the problem is not that experts underestimate randomness; it is deeper: the tools themselves used in regression analysis and similar methods underestimate fat tails, and hence the randomness in the data. We should therefore avoid reaching for psychological explanations when the errors lie in the statistical methods themselves.
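The point about L2-based measures can be illustrated with a small simulation. The sketch below (my own illustration, not from the text; the distributions, tail index, and sample size are assumptions chosen for clarity) compares the sample standard deviation under a thin-tailed Gaussian with a fat-tailed Pareto whose variance is infinite, and measures how much of the sum of squares comes from the single largest observation:

```python
import random
import statistics

# Illustrative sketch: how the sample SD behaves under thin tails
# (Gaussian) versus fat tails (Pareto). Parameters are assumptions.

random.seed(42)
N = 10_000

# Thin-tailed benchmark: Gaussian with true sigma = 1.
gauss = [random.gauss(0.0, 1.0) for _ in range(N)]
sd_gauss = statistics.pstdev(gauss)  # should be close to 1

# Fat-tailed case: Pareto with tail index alpha = 1.5 (infinite
# variance), sampled by inverse-CDF: X = U**(-1/alpha), X >= 1.
alpha = 1.5
pareto = [random.random() ** (-1.0 / alpha) for _ in range(N)]

def max_share_of_sum_of_squares(xs):
    """Fraction of the total sum of squares contributed by the single
    largest observation -- a crude gauge of tail dominance."""
    squares = [x * x for x in xs]
    return max(squares) / sum(squares)

share_gauss = max_share_of_sum_of_squares(gauss)
share_pareto = max_share_of_sum_of_squares(pareto)
```

Under the Gaussian, no single draw moves the standard deviation, and `sd_gauss` sits near the true value of 1. Under the Pareto, one observation can account for a large fraction of the sum of squares, so any L2-based estimate is largely an artifact of the biggest point that happened to land in the sample — which is exactly why such measures understate the role of extreme events.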