Our organization performs database maintenance against our enterprise geodatabase on a daily basis (well, at least on workdays) using a Python script. While the script has been in place for a couple of years, a colleague and I decided to revisit it to see whether we could improve its performance or otherwise make it better. In a nutshell, our script works like this:
Block connections to database > disconnect users > reconcile versions > analyze datasets > compress database > rebuild indexes > accept connections to database
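For context, here is a minimal sketch of that sequence in arcpy. The connection file path, target version name, and dataset list are placeholders, not our actual values:

```python
import arcpy

# Placeholder admin connection file; ours is different.
sde = r"C:\connections\admin.sde"

# 1. Block new connections, then disconnect current users.
arcpy.AcceptConnections(sde, False)
arcpy.DisconnectUser(sde, "ALL")

# 2. Reconcile all versions against DEFAULT (version name assumed).
arcpy.ReconcileVersions_management(
    sde, "ALL_VERSIONS", "sde.DEFAULT", arcpy.ListVersions(sde))

# 3. Update statistics; see the listing sketch below for how
#    we actually build this list.
datasets = ["OWNER.SomeTable"]  # placeholder
arcpy.AnalyzeDatasets_management(
    sde, "NO_SYSTEM", datasets,
    "ANALYZE_BASE", "ANALYZE_DELTA", "ANALYZE_ARCHIVE")

# 4. Compress the geodatabase.
arcpy.Compress_management(sde)

# 5. Rebuild indexes on the same datasets.
arcpy.RebuildIndexes_management(sde, "NO_SYSTEM", datasets, "ALL")

# 6. Re-open the database to users.
arcpy.AcceptConnections(sde, True)
```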
The issue we are having at the moment is the proper usage of the Analyze Datasets tool; admittedly, the correct usage of this tool has always baffled me. But in any event, right now we have it set up so that it runs against the following: tables, feature classes that reside at the root level of the database, feature datasets, and all feature classes that reside within each feature dataset.
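Concretely, the list we pass to Analyze Datasets is assembled roughly like this (again a sketch with a placeholder connection path, not our production code):

```python
import arcpy

# Placeholder connection; substitute the real admin connection file.
arcpy.env.workspace = r"C:\connections\admin.sde"

# Tables and feature classes at the root level of the database.
data = arcpy.ListTables() + arcpy.ListFeatureClasses()

# Each feature dataset, plus every feature class inside it
# (the part we suspect may be redundant).
for fds in arcpy.ListDatasets("", "Feature"):
    data.append(fds)
    data += arcpy.ListFeatureClasses(feature_dataset=fds)

arcpy.AnalyzeDatasets_management(
    arcpy.env.workspace, "NO_SYSTEM", data,
    "ANALYZE_BASE", "ANALYZE_DELTA", "ANALYZE_ARCHIVE")
```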
My main question is this: is running Analyze Datasets against both the feature datasets and the feature classes that reside within those feature datasets overkill?
For instance, say we simply run the tool against the feature datasets ... does that also analyze all the feature classes that live within them? Or is the way we have it set up the correct way? The way we have it set up is definitely the slower way, so if what we are doing is in fact overkill, we would like to modify the script accordingly to save time and cut out superfluous work.