By D. R. Cox
Publisher: Cambridge University Press
Print Publication Year: 2006
Online Publication Date: March 2011
Chapter DOI: http://dx.doi.org/10.1017/CBO9780511813559.012
Much of this book has involved an interplay between broadly frequentist discussion and a Bayesian approach, the latter usually involving a wider notion of probability. In many, but by no means all, situations numerically similar answers can be obtained from the two routes. Both approaches occur so widely in the current literature that it is important to appreciate the relation between them, and for that reason the book has attempted a relatively dispassionate assessment.
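The numerical agreement between the two routes can be made concrete in a standard case not spelled out here: for a normal mean with known variance, the frequentist 95% confidence interval and the Bayesian 95% posterior interval under a flat prior are given by the same formula. The following sketch (my illustration, with arbitrarily chosen values for the mean, standard deviation, and sample size) shows the coincidence:

```python
import math
import random

# Illustration: normal data with known standard deviation sigma.
# Frequentist 95% interval: xbar +/- 1.96 * sigma / sqrt(n).
# Bayesian flat-prior posterior: N(xbar, sigma^2 / n), whose central
# 95% interval is the same pair of numbers.
random.seed(1)
mu_true, sigma, n = 10.0, 2.0, 50          # assumed values for the sketch
data = [random.gauss(mu_true, sigma) for _ in range(n)]
xbar = sum(data) / n
half = 1.96 * sigma / math.sqrt(n)

freq_ci = (xbar - half, xbar + half)       # confidence interval
bayes_ci = (xbar - half, xbar + half)      # flat-prior posterior interval

print(freq_ci == bayes_ci)                 # the two intervals coincide
```

With an informative prior, or in less regular problems, the two intervals would in general differ, which is why the agreement holds in many but by no means all situations.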
This appendix is, by contrast, a more personal statement. Whatever the approach to formal inference, formalization of the research question as being concerned with aspects of a specified kind of probability model is clearly of critical importance. It translates a subject-matter question into a formal statistical question and that translation must be reasonably faithful and, as far as is feasible, the consistency of the model with the data must be checked. How this translation from subject-matter problem to statistical model is done is often the most critical part of an analysis. Furthermore, all formal representations of the process of analysis and its justification are at best idealized models of an often complex chain of argument.
Frequentist analyses are based on a simple and powerful unifying principle. The implications of data are examined using measuring techniques such as confidence limits and significance tests calibrated, as are other measuring instruments, indirectly by the hypothetical consequences of their repeated use.
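The calibration idea can be illustrated by simulation (a minimal sketch of my own, not taken from the text): under hypothetical repeated use, a nominal 95% confidence interval should cover the true parameter value in roughly 95% of repetitions.

```python
import math
import random

# Simulate repeated use of the known-variance normal 95% interval and
# record how often it covers the true mean. The parameter values and
# number of repetitions are arbitrary choices for the illustration.
random.seed(0)
mu_true, sigma, n, reps = 5.0, 1.0, 25, 2000
covered = 0
for _ in range(reps):
    sample = [random.gauss(mu_true, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    half = 1.96 * sigma / math.sqrt(n)     # known-variance interval
    if xbar - half <= mu_true <= xbar + half:
        covered += 1
coverage = covered / reps
print(coverage)                            # close to the nominal 0.95
```

The long-run coverage frequency is the sense in which the measuring instrument is calibrated, just as a physical instrument is judged by the behaviour of its readings over repeated applications.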