Estimation: Uncertainty estimates are needed, both for predicted values and for parameter estimates, under Bayesian and frequentist approaches.

*****************

Goals of comparison/evaluation:

1. Predictive ability
   - Quick and dirty: RMSE
   - Accounting for uncertainty: CRPS, log-score, posterior predictive p-values (PPP)
2. Describing the second-order properties, understanding the process
   - How do you summarize the covariance matrix and compare? (determinant? EOFs?)
   - Qualitative comparison (Reinhard's pictures, visualization team)
3. Model comparison (AIC speaks to model fit but not to predictive ability; cross-validation methods)
4. Model assessment: Mikyoung, testing (existing papers)

***************

What is needed for model comparison:
- It is hard to run our own models, let alone other groups' models.
- Store the MCMC output?
- Is it possible to have a common standard? (probably only at the quick-and-dirty level)

Rough code sketches for the predictive scores, the EOF-based covariance summaries, AIC/cross-validation, and one possible common output format follow below.
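A minimal sketch of the quick-and-dirty and uncertainty-aware predictive scores from item 1, assuming posterior predictive draws are available as an array `draws` of shape (n_draws, n_locations) and observations `y_obs` of shape (n_locations,); the array names, shapes, and the kernel-density choice for the log-score are assumptions for illustration, not part of the notes.

```python
import numpy as np
from scipy.stats import gaussian_kde

def rmse(pred_mean, y_obs):
    """Quick-and-dirty score: root mean squared error of the point prediction."""
    return np.sqrt(np.mean((pred_mean - y_obs) ** 2))

def crps_sample(draws, y_obs):
    """Sample-based CRPS averaged over locations, via the identity
    CRPS(F, y) = E|X - y| - 0.5 E|X - X'| with X, X' ~ F."""
    scores = []
    for j in range(draws.shape[1]):
        x = draws[:, j]
        term1 = np.mean(np.abs(x - y_obs[j]))
        term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
        scores.append(term1 - term2)
    return np.mean(scores)

def log_score(draws, y_obs):
    """Negative log predictive density, using a kernel density estimate of
    the predictive distribution at each location (one possible choice)."""
    logp = []
    for j in range(draws.shape[1]):
        kde = gaussian_kde(draws[:, j])
        logp.append(np.log(kde.evaluate(y_obs[j]))[0])
    return -np.mean(logp)

# Hypothetical usage with fake draws, only to show the shapes involved.
rng = np.random.default_rng(0)
draws = rng.normal(size=(500, 20))   # 500 predictive draws at 20 locations
y_obs = rng.normal(size=20)          # held-out observations
print(rmse(draws.mean(axis=0), y_obs), crps_sample(draws, y_obs), log_score(draws, y_obs))
```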
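One way to make the covariance-summary question in item 2 concrete: summarize each model's covariance matrix by its log-determinant and leading EOFs (eigenvectors ordered by explained variance), then compare leading patterns across models. The function names, the number of EOFs retained, and the pattern-correlation comparison are assumptions, not an agreed procedure.

```python
import numpy as np

def covariance_summary(cov, n_eofs=3):
    """Summarize a covariance matrix by its log-determinant and leading EOFs."""
    _, logdet = np.linalg.slogdet(cov)
    eigvals, eigvecs = np.linalg.eigh(cov)    # returned in ascending order
    order = np.argsort(eigvals)[::-1]         # reorder by descending variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    explained = eigvals / eigvals.sum()
    return {
        "logdet": logdet,
        "eofs": eigvecs[:, :n_eofs],               # leading spatial patterns
        "explained_variance": explained[:n_eofs],  # fraction of total variance
    }

def compare_leading_eofs(cov_a, cov_b, n_eofs=3):
    """Compare two covariances by the pattern similarity (absolute inner
    product) between their matched leading EOFs."""
    ea = covariance_summary(cov_a, n_eofs)["eofs"]
    eb = covariance_summary(cov_b, n_eofs)["eofs"]
    return np.abs(np.diag(ea.T @ eb))

# Hypothetical usage with random positive-definite matrices.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 20)); cov_a = A @ A.T / 20
B = rng.normal(size=(20, 20)); cov_b = B @ B.T / 20
print(covariance_summary(cov_a)["explained_variance"])
print(compare_leading_eofs(cov_a, cov_b))
```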
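Item 3 notes that AIC speaks to model fit rather than predictive ability, whereas cross-validation targets prediction directly. A rough sketch of both, assuming a generic `fit`/`predict` interface that each model would have to supply; that interface is an assumption, not a common standard.

```python
import numpy as np

def aic(log_likelihood, n_params):
    """AIC = 2k - 2 log L: in-sample fit penalized for complexity,
    not a direct measure of out-of-sample predictive ability."""
    return 2 * n_params - 2 * log_likelihood

def cv_rmse(fit, predict, X, y, n_folds=5, seed=0):
    """K-fold cross-validated RMSE, scoring only held-out predictions.

    fit(X_train, y_train) and predict(model, X_test) are placeholders for
    whatever interface a given model exposes."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    errors = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[i] for i in range(n_folds) if i != k])
        model = fit(X[train], y[train])
        pred = predict(model, X[test])
        errors.append(np.mean((pred - y[test]) ** 2))
    return np.sqrt(np.mean(errors))
```

The same loop can score CRPS or log-score instead of RMSE if `predict` returns predictive draws rather than a point prediction, which would tie the cross-validation directly to the uncertainty-aware scores above.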
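On storing MCMC output under a common standard: the notes leave the format open, so the layout below (posterior draws plus minimal metadata in a compressed .npz archive) is only one hypothetical quick-and-dirty convention, sketched to make the question concrete.

```python
import numpy as np

def save_mcmc_output(path, draws, param_names, model_name):
    """Store posterior draws plus minimal metadata in one compressed file.

    draws: array of shape (n_draws, n_params). A hypothetical minimal layout."""
    np.savez_compressed(
        path,
        draws=np.asarray(draws),
        param_names=np.asarray(param_names),
        model_name=np.asarray(model_name),
    )

def load_mcmc_output(path):
    """Read the draws and metadata back into a plain dictionary."""
    with np.load(path, allow_pickle=False) as f:
        return {key: f[key] for key in f.files}

# Hypothetical usage.
rng = np.random.default_rng(2)
save_mcmc_output("model_a_draws.npz",
                 rng.normal(size=(1000, 3)),
                 ["beta0", "beta1", "sigma"],
                 "model_a")
out = load_mcmc_output("model_a_draws.npz")
print(out["model_name"], out["draws"].shape)
```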