Statistics 581-2-3

Syllabus

581

1. The scientific method (2 lectures)
The role of statistical analysis in science. Model building, prediction, scientific induction, decision making.

2. Convergence of random vectors (6)
Review of basic convergence concepts. Skorokhod construction. Multivariate delta-method. Kolmogorov-Smirnov theorem. Convergence of sample quantiles.
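
For orientation, the multivariate delta method states: if $\sqrt{n}(X_n - \mu)$ converges in distribution to $N(0, \Sigma)$ and $g$ is differentiable at $\mu$, then

    \[
    \sqrt{n}\,\bigl(g(X_n) - g(\mu)\bigr) \to_d N\bigl(0,\ \nabla g(\mu)^{\top} \Sigma\, \nabla g(\mu)\bigr).
    \]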

3. Comparison of estimators (3)
Asymptotic relative efficiency. Sufficiency. Lower bounds on the variance of estimators. First order efficiency.
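
The best known of the variance lower bounds treated here is the Cramér-Rao (information) inequality: under the usual regularity conditions, an unbiased estimator $T$ of $\theta$ based on $X \sim f(\cdot\,;\theta)$ satisfies

    \[
    \operatorname{Var}_\theta(T) \;\ge\; \frac{1}{I(\theta)},
    \qquad
    I(\theta) = E_\theta\!\left[\Bigl(\frac{\partial}{\partial\theta} \log f(X;\theta)\Bigr)^{\!2}\right].
    \]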

4. Methods of maximum likelihood (14)
Consistency of maximum likelihood estimators. Asymptotic normality of likelihood equation estimators. The method of scoring. The EM- and IP-algorithms and their properties. Nonparametric maximum likelihood. Conditional and partial likelihood. Monotone likelihood ratio and UMP tests. LMP tests. The asymptotic distribution of likelihood ratio, score, and Wald tests (including power at contiguous alternatives).
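
As a concrete taste of the EM algorithm, the following is a minimal sketch for a two-component Gaussian mixture with a known common variance (the function name and data are invented for illustration; this is not course code):

    import math
    import random

    def em_two_gaussians(data, mu1, mu2, sigma=1.0, n_iter=50):
        """Estimate the component means and the mixing weight by EM."""
        w = 0.5  # initial mixing weight for component 1
        for _ in range(n_iter):
            # E-step: posterior probability that each point belongs to component 1
            # (the common normalizing constants cancel, since sigma is shared)
            resp = []
            for x in data:
                p1 = w * math.exp(-0.5 * ((x - mu1) / sigma) ** 2)
                p2 = (1 - w) * math.exp(-0.5 * ((x - mu2) / sigma) ** 2)
                resp.append(p1 / (p1 + p2))
            # M-step: reestimate parameters from the responsibilities
            s = sum(resp)
            mu1 = sum(r * x for r, x in zip(resp, data)) / s
            mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - s)
            w = s / len(data)
        return mu1, mu2, w

    rng = random.Random(0)
    data = [rng.gauss(-2, 1) for _ in range(200)] + [rng.gauss(3, 1) for _ in range(300)]
    print(em_two_gaussians(data, mu1=-1.0, mu2=1.0))  # converges near (-2, 3, 0.4)

Each EM iteration increases the observed-data likelihood, which is the starting point for the convergence properties studied in this unit.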

5. Improved asymptotic distributions (4)
Edgeworth expansions. Saddlepoint approximations.
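
For example, the one-term Edgeworth expansion refines the central limit theorem for the standardized mean of iid variables with third standardized cumulant $\gamma$:

    \[
    P\!\left(\frac{\sqrt{n}\,(\bar X_n - \mu)}{\sigma} \le x\right)
    = \Phi(x) - \phi(x)\,\frac{\gamma\,(x^2 - 1)}{6\sqrt{n}} + O(n^{-1}).
    \]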

582

6. The likelihood principle (2)
Birnbaum's theorem. Variants and consequences of the likelihood principle.

7. Bayes methods (8)
Conjugate priors. Jeffreys priors. Consistency of Bayes estimates. Asymptotic normality of posterior distributions. Computation and approximation strategies. Empirical Bayes methods.
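
The simplest example of conjugacy is the Beta-Binomial pair: a Beta(a, b) prior on a Bernoulli success probability combined with k successes in n trials yields a Beta(a + k, b + n - k) posterior. A one-line sketch (the function name is illustrative):

    def beta_binomial_update(a, b, successes, trials):
        # posterior parameters under a Beta(a, b) prior
        return a + successes, b + trials - successes

    a_post, b_post = beta_binomial_update(a=1.0, b=1.0, successes=7, trials=10)
    print(a_post, b_post)              # Beta(8, 4) posterior
    print(a_post / (a_post + b_post))  # posterior mean 2/3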

8. Statistical functionals (4)
Differentiation of functionals. Analysis of remainder terms. Asymptotic properties.

9. Finite sample estimates of variability (5)
Jackknife. Bootstrap. Other resampling methods.
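
The basic nonparametric bootstrap resamples the data with replacement and uses the spread of the recomputed statistic as a variability estimate. A minimal sketch (names and sample sizes are illustrative):

    import random
    import statistics

    def bootstrap_se(data, stat, n_boot=1000, seed=0):
        """Bootstrap standard error of stat(data)."""
        rng = random.Random(seed)
        reps = [stat([rng.choice(data) for _ in range(len(data))])
                for _ in range(n_boot)]
        return statistics.stdev(reps)

    rng = random.Random(1)
    data = [rng.gauss(0, 1) for _ in range(100)]
    print(bootstrap_se(data, statistics.median))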

10. Robustness (9)
Qualitative robustness and resistance. M-estimates. Influence curve and breakdown point. Minimum distance estimates. Robustness and Bayes methods.
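
A prototypical M-estimate is Huber's estimate of location, computable by iteratively reweighted averaging; observations beyond the tuning constant k are downweighted rather than discarded. A sketch using the customary constant k = 1.345 (names and data are illustrative):

    def huber_location(data, k=1.345, n_iter=50):
        mu = sorted(data)[len(data) // 2]  # start from the median
        for _ in range(n_iter):
            # Huber weights: 1 inside [mu - k, mu + k], k/|x - mu| outside
            weights = [1.0 if abs(x - mu) <= k else k / abs(x - mu) for x in data]
            mu = sum(w * x for w, x in zip(weights, data)) / sum(weights)
        return mu

    print(huber_location([0.1, -0.3, 0.2, 0.05, 9.0]))  # the outlier has bounded influence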

583

11. Estimation theory for dependent data (6)
Likelihood theory for discrete time processes. Consistency and asymptotic normality. Likelihood tests for Markov chains. Robustness of iid-based estimators against dependence.
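
A starting point for likelihood theory in discrete time is the factorization of the log likelihood along an observed path $x_0, \dots, x_n$ of a Markov chain with transition density $p_\theta$:

    \[
    \ell(\theta) = \log \pi_\theta(x_0) + \sum_{t=1}^{n} \log p_\theta(x_t \mid x_{t-1}),
    \]

so that score and information calculations parallel the iid case, with martingale arguments replacing sums of independent terms.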

The following topics are options; the selection will vary with the instructor's point of view. About three of them can be covered.

12. Optimal estimation (6)
Minimum variance estimation. Rao-Blackwell theorem. Lehmann-Scheffé theorem. U-estimates. Invariance and equivariance. Parameters of location and scale. Pitman estimates.
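
The Rao-Blackwell theorem asserts: if $T$ is sufficient and $\delta$ is an unbiased estimator, then $\delta^*(T) = E[\delta \mid T]$ is unbiased and

    \[
    \operatorname{Var}_\theta(\delta^*) \le \operatorname{Var}_\theta(\delta) \quad \text{for all } \theta,
    \]

with the Lehmann-Scheffé theorem adding uniqueness (and hence optimality) when $T$ is also complete.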

13. Nonparametric methods (10)
Invariance. Rank tests based on scores. R-estimates. L-estimates. Goodness of fit tests. Density estimation. Graphical methods.
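
As one example from this unit, the kernel density estimate based on a kernel $K$ and bandwidth $h$ is

    \[
    \hat f_h(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{x - X_i}{h}\right),
    \]

with the choice of $h$ governing a bias-variance trade-off.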

14. Bayesian model comparison (4)
Bayes factors. Inference in the presence of many competing non-nested models. Comparison between Bayes factors and P-values.
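
The Bayes factor for comparing models $M_1$ and $M_2$ on data $x$ is the ratio of marginal likelihoods,

    \[
    B_{12} = \frac{\int f(x \mid \theta_1, M_1)\, \pi(\theta_1 \mid M_1)\, d\theta_1}
                  {\int f(x \mid \theta_2, M_2)\, \pi(\theta_2 \mid M_2)\, d\theta_2},
    \]

which, unlike a P-value, treats the competing models symmetrically.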

15. Ancillarity in exponential families (6)
Exponential family theory. B-ancillarity, M-ancillarity. Plausibility inference. Conditional inference.

16. Testing theory (8)
Unbiased and similar tests. Invariance, maximal invariants, UMPI tests. Local tests in the presence of nuisance parameters. Separate families.

17. Decision theory (8)
Admissibility. Optimality of Bayes procedures. Minimax theory. Shrinkage estimators.
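
The canonical shrinkage example is the James-Stein estimator: for $X \sim N_p(\theta, I_p)$ with $p \ge 3$,

    \[
    \delta_{JS}(X) = \left(1 - \frac{p - 2}{\lVert X \rVert^2}\right) X
    \]

dominates $X$ under squared-error loss, so the usual estimator is inadmissible in dimension three and higher.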

Note

The faculty of Statistics and Biostatistics have decided to reduce the content of this course by one third, with a corresponding reduction in the number of topics covered. No revised syllabus has been agreed upon.

Recommended textbooks

No single text covers all the material we are interested in; the following books each cover some aspect of the course.

Cox, D.R., and D.V. Hinkley: Theoretical statistics. London: Chapman and Hall, 1974.
Hampel, F.R., P.J. Rousseeuw, E.M. Ronchetti and W. A. Stahel: Robust statistics: the approach based on influence functions. New York: Wiley, 1986.
Lehmann, E.L.: Theory of point estimation. New York: Wiley, 1983.
Serfling, R. J.: Approximation theorems of mathematical statistics. New York: Wiley, 1980.
Barnett, V.: Comparative Statistical Inference.