Most of human history (actually pre-history) is uneventful. In its current form, our species has been around for something like 100,000 years. With only occasional and often transitory exceptions, we didn't learn or create anything terribly impressive for 99 percent of that time. But in just the last few hundred years, human beings have accomplished a remarkable amount. Scientific and technological progress has accelerated without pause since the modern scientific method was formulated in the first half of the 17th century, swiftly followed by the rise of the modern physical sciences.

This recent rush of scientific and technological advance has been sustained for longer than any individual human lifetime, which perhaps makes it easy to forget that it is not inevitable. In fact, science has barely begun to find traction in the humanities and social sciences, including management theory, the decision sciences, and forecasting. We should be encouraged by the recent surge of activity in "evidence-based management." At the same time, we should remain alert for practices that, while widely accepted, may impede the advance of scientific knowledge. I have long devoted myself to critically evaluating widely accepted theories and practices in forecasting as applied to marketing and business in general, including the use of SWOT analysis, methods of choosing advertising agencies, the use of portfolio planning methods for decision-making, and reliance on game theory and focus groups. In this article, I assess tests of statistical significance and conclude that these tests harm scientific progress.

Although popular in leading journals in economics, psychology, and other fields, tests of statistical significance are rarely used in the physical sciences. Schmidt and Hunter, after extensive study, concluded that "Statistical significance testing retards the growth of scientific knowledge; it never makes a scientific contribution." Several problems arise with tests of statistical significance.

These problems include:

the publication of faulty interpretations of statistical significance;

misinterpretation of statistical significance by journal reviewers;

and misinterpretation by readers even after training (such as in MBA programs).

The development of cumulative scientific knowledge is harmed by the resulting bias against publishing papers that fail to reject the null hypothesis. Statistical significance testing also distracts attention from key issues by leading researchers to believe, incorrectly, that the analysis is complete once significance has been assessed.

How are researchers to report findings without use of tests of statistical significance? Better alternatives are available.

To assess importance, I recommend using effect sizes or practical significance. To assess confidence, use prediction intervals; to assess replicability, use replications and extensions; and to assess generality, use meta-analyses. In urging the discontinuation of statistical significance testing, my focus is on the development of knowledge about forecasting. I speculate that statistical significance might help in certain areas, though currently there is no evidence for this. Until such evidence emerges, statistical significance tests remain unnecessary even when properly conducted and properly interpreted.
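To make the first two alternatives concrete, here is a minimal sketch, using only the Python standard library, of how a researcher might report an effect size (Cohen's d) and an approximate 95% interval for a mean difference instead of a p-value. The data and method names are hypothetical, chosen purely to illustrate the calculations; with small samples, a t-distribution multiplier would be more appropriate than the normal-approximation 1.96 used here.

```python
import statistics

# Hypothetical accuracy data (e.g., absolute percentage errors) for two
# forecasting methods; the numbers are illustrative only.
method_a = [8.1, 7.4, 9.0, 6.8, 7.9, 8.5, 7.2, 8.8]
method_b = [6.2, 5.9, 7.1, 5.4, 6.8, 6.5, 5.7, 6.9]

def cohens_d(x, y):
    """Effect size: standardized mean difference (Cohen's d, pooled SD)."""
    nx, ny = len(x), len(y)
    pooled_var = (
        (nx - 1) * statistics.variance(x) + (ny - 1) * statistics.variance(y)
    ) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / pooled_var ** 0.5

d = cohens_d(method_a, method_b)
print(f"Effect size (Cohen's d): {d:.2f}")

# Approximate 95% interval for the difference in mean error
# (normal approximation; use a t-multiplier for small samples).
diff = statistics.mean(method_a) - statistics.mean(method_b)
se = (statistics.variance(method_a) / len(method_a)
      + statistics.variance(method_b) / len(method_b)) ** 0.5
low, high = diff - 1.96 * se, diff + 1.96 * se
print(f"Mean difference: {diff:.2f}, 95% interval: [{low:.2f}, {high:.2f}]")
```

Reporting the interval and effect size tells the reader how large the difference is and how uncertain the estimate is, which is exactly the information a dichotomous significant/not-significant verdict throws away.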