I am interpolating altitude values from point data to recreate a DEM, using the ordinary kriging method to predict these values. I would also like to understand the standard error associated with the predicted values. The Geostatistical Analyst extension in ArcGIS 9.2 provides several pieces of information about standard errors:
- In the cross-validation step, ArcGIS reports a standard error value (that is the column name in the table of detailed results of the interpolation process) for each data point;
- Also in the cross-validation step, ArcGIS computes an average standard prediction error;
- We can also create a prediction standard error map.
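For context, here is my current understanding of where the per-point numbers come from: leave-one-out cross-validation refits the kriging prediction at each data point using the remaining points, which yields both a prediction error and a kriging standard error for that point; the average is then taken over all points. This is only a minimal NumPy sketch of that idea, assuming an exponential covariance model with made-up sill and range values (not necessarily the model or estimates ArcGIS would fit), so it illustrates the mechanics rather than reproducing the tool's output:

```python
import numpy as np

def cov(h, sill=1.0, rng=50.0):
    # Exponential covariance model -- an assumption for illustration;
    # Geostatistical Analyst lets you fit spherical, Gaussian, etc.
    return sill * np.exp(-3.0 * h / rng)

def ok_predict(xy, z, x0):
    """Ordinary kriging prediction and kriging standard error at x0."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = cov(d)                 # data-to-data covariances
    A[n, :n] = A[:n, n] = 1.0          # unbiasedness constraint (weights sum to 1)
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = cov(np.linalg.norm(xy - x0, axis=1))  # data-to-target covariances
    b[n] = 1.0
    sol = np.linalg.solve(A, b)
    lam, mu = sol[:n], sol[n]          # kriging weights and Lagrange multiplier
    pred = lam @ z
    var = cov(0.0) - lam @ b[:n] - mu  # ordinary kriging variance
    return pred, np.sqrt(max(var, 0.0))

# Synthetic altitude-like data (purely illustrative)
g = np.random.default_rng(0)
xy = g.uniform(0, 100, (30, 2))
z = np.sin(xy[:, 0] / 20) + 0.1 * g.standard_normal(30)

# Leave-one-out cross-validation: one error and one standard error per point
errs, ses = [], []
for i in range(len(z)):
    m = np.arange(len(z)) != i
    pred, se = ok_predict(xy[m], z[m], xy[i])
    errs.append(pred - z[i])
    ses.append(se)
errs, ses = np.array(errs), np.array(ses)

print("average standard error:", ses.mean())
print("RMS standardized error:", np.sqrt(np.mean((errs / ses) ** 2)))
```

The per-point standard errors here play the role of the "standard error" column in the cross-validation table, and their mean plays the role of the average standard prediction error; a prediction standard error map would come from evaluating the same kriging standard error on a grid of unsampled locations rather than at the data points. I would like to confirm whether this matches what ArcGIS actually does.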
So, my questions are simple, though the answers may be much more complex...
What is the difference between these standard errors (the per-point standard error, the average standard prediction error, and the prediction standard error map)?
In particular, how are they computed, both in practice and statistically?
Cheers,
Simon