This page represents the views of Helge Rørdam Olesen, NERI, who has worked with model evaluation since the early 1990s.
The difficulties of model evaluation are one of my 'pet topics'.
The table below summarises some of the difficulties in model evaluation. It also attempts to be constructive, pointing to potential solutions to the problems, although these solutions in turn lead to derived problems, as the last column of the table shows.
I have written numerous papers elaborating on slightly differing versions of the table - see references given at http://www.harmo.org/kit/hro_papers.asp.
You may comment on the table, preferably via the 'Discussion' tab or by adding links.
|Problem||Potential solution||Derived problem|
|The appropriate evaluation method depends on the context of the application and the data sets available.||An array of various evaluation methods must be developed.||What weight should be ascribed to the various performance measures?|
|Input data sets are limited. They reflect only a few of the possible scenarios.||a) Extrapolate model behaviour outside the validation domain. b) Use many data sets.||a) Does the model give the right result for the right reason? We must understand model behaviour! b) Hard work!|
|Processing of input data for model evaluation is far from trivial.||Take care! Use quality indicators.|| |
|The luxury of independent data sets can rarely be afforded.||Use many data sets.||Hard work!|
|There are inherent uncertainties.||Use Venkatram's conceptual framework.||Ensembles are difficult to establish.|
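To make the question of weighting performance measures concrete, here is a minimal sketch of three measures commonly used in dispersion model evaluation: fractional bias (FB), normalised mean square error (NMSE) and the factor-of-two fraction (FAC2). The definitions follow the standard forms used in the model evaluation literature; the function name and the sample data are purely illustrative, not part of any particular evaluation kit.

```python
import numpy as np

def performance_measures(obs, pred):
    """Compute three common performance measures for paired
    observed and predicted concentrations (values must be
    positive for NMSE and FAC2 to be meaningful)."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)

    # Fractional bias: 0 for a perfect model, bounded by -2 and +2;
    # positive values indicate underprediction.
    fb = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

    # Normalised mean square error: a scatter measure, 0 for a perfect model.
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())

    # FAC2: fraction of pairs where the prediction is within
    # a factor of two of the observation.
    ratio = pred / obs
    fac2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))

    return fb, nmse, fac2

# Illustrative data: a model that is roughly right for most pairs
# but badly overpredicts one of them.
obs = [10.0, 20.0, 30.0, 40.0]
pred = [8.0, 18.0, 33.0, 90.0]
fb, nmse, fac2 = performance_measures(obs, pred)
```

A single outlier drags FB and NMSE in different directions than FAC2, which is exactly why the question of how to weight the various measures has no universal answer.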