Accuracy Assessment and Confusion Matrix workflows exist for comparing a classified raster to a reference raster, but no equivalent process exists for point cloud files (at least I can't find one). Being able to compare two point cloud datasets would let us check a model's performance after it has been trained and verify that the accuracy assessment generated during training still holds. It would also let us gauge the accuracy of out-of-the-box auto-classification methods, such as Classify Ground. Ensuring these methods produce results within acceptable tolerances of a by-hand approach is a necessary step before we can confidently adopt them in our workflows.
Relevant links:
Assess point cloud training results—ArcGIS Pro | Documentation -- Accuracy results are reported during model training; this same accuracy report would be the desired output of a tool that does not require training a model.
Accuracy Assessment—ArcGIS Pro | Documentation
Create Accuracy Assessment Points (Spatial Analyst)—ArcGIS Pro | Documentation
Compute Confusion Matrix (Spatial Analyst)—ArcGIS Pro | Documentation -- These three tools work for raster datasets but not for point cloud datasets.
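In the meantime, the comparison can be approximated outside ArcGIS Pro. A minimal sketch of the idea, assuming both files contain the same points in the same order (e.g. the reference is a hand-classified copy of the same dataset) and that the LAS classification codes have already been read into arrays (a library such as laspy could do this; that step is not shown here):

```python
import numpy as np

def confusion_matrix(reference, predicted, classes):
    """Rows = reference class, columns = predicted class."""
    idx = {c: i for i, c in enumerate(classes)}
    m = np.zeros((len(classes), len(classes)), dtype=int)
    for r, p in zip(reference, predicted):
        m[idx[r], idx[p]] += 1
    return m

def overall_accuracy(matrix):
    """Fraction of points whose predicted class matches the reference."""
    return np.trace(matrix) / matrix.sum()

# Synthetic classification codes for illustration
# (2 = ground, 5 = high vegetation in the LAS spec)
reference = np.array([2, 2, 5, 5, 2, 5])
predicted = np.array([2, 5, 5, 5, 2, 2])

cm = confusion_matrix(reference, predicted, classes=[2, 5])
print(cm)                    # [[2 1]
                             #  [1 2]]
print(overall_accuracy(cm))  # 0.666...
```

This only works when the two point clouds share point order and count; comparing independently acquired clouds would additionally require a nearest-neighbor match between the two sets of points, which is presumably part of what a built-in tool would handle.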
Hello and thank you for sharing this suggestion. The Evaluate Point Cloud Classification Model tool provides a way to compare multiple classification models at once and it provides a confusion matrix for each model. However, this tool requires deep learning models as input and does not allow for comparison between two or more stand-alone point cloud datasets. We intend to introduce a tool that does so, but until then, I hope the Evaluate tool is useful for your needs.
Regards,
Khalid