Parcel Fabric Performance Issues

04-21-2017 07:27 AM
GavinMcDade
New Contributor III

We’re currently in the development and testing phase of implementing the Parcel Fabric for our county, and have encountered EXTREME performance issues that no one seems to have an answer for.

 

As a brief background:  We’ve already (months ago) completed a lengthy cleanup process of our parcels – simplifying the line work by removing excessive vertices and planarizing the majority of them to 2-point lines (except for a minority of parcel boundaries following natural features like hydro, etc., which were left as linestrings). This cleanup removed in excess of 10 million extraneous vertices. Our cleaned up parcel and subdivision data was then further processed and loaded into the ESRI-provided staging schema (FGDB) where any and all topology errors/issues were corrected, then ultimately loaded into a new Fabric (FGDB). This source Fabric FGDB was subsequently loaded into our Development SDE (ArcSDE 10.3.1 | Oracle 11.2.0.3) database via the Copy Parcel Fabric tool.
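
For context, the vertex-thinning portion of that cleanup can be sketched in arcpy roughly as below. The paths and tolerance are placeholders rather than our actual values, and the real process involved several additional manual steps.

```python
import arcpy

# Placeholder paths and tolerance -- the actual cleanup was a longer,
# multi-step process; this only illustrates the vertex-thinning idea.
in_lines  = r"C:\Cleanup\Parcels_raw.gdb\ParcelLines"
out_lines = r"C:\Cleanup\Parcels_clean.gdb\ParcelLines_simplified"

# POINT_REMOVE drops vertices that deviate from the line by less than the
# tolerance, collapsing near-straight boundaries toward 2-point lines.
arcpy.SimplifyLine_cartography(in_lines, out_lines, "POINT_REMOVE", "0.05 Feet")
```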

 

Our first performance issue was encountered when attempting to register the Fabric as versioned. Because the versioning process runs a series of "analyzes" on all of the Fabric objects/tables prior to the registration itself, the process appears to hang, when in reality it simply takes a very long time to complete. In our case, "a very long time" means that both the "Parcels" and "Lines" feature classes took 4-5 hrs. each to complete. A cursory look at the tables (and their various created indexes) revealed that the "Lines" feature class alone has 2.9 million lines/records. While this is somewhat alarming to me, no one at ESRI has suggested that it is beyond the pale, per se.
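
If it helps anyone reproduce the timing, the registration step can be scripted so the statistics pass is explicit instead of hidden inside the ArcCatalog command. This is only a sketch with placeholder connection and dataset names, not our production script:

```python
import arcpy

# Placeholder connection file and fabric name -- adjust to your own schema.
sde = r"C:\Connections\dev_oracle.sde"
arcpy.env.workspace = sde

# Gather the fabric-related feature classes and tables owned by this user.
data = (arcpy.ListFeatureClasses("*CountyFabric*") +
        arcpy.ListTables("*CountyFabric*"))

# Run the long statistics pass explicitly so it can be timed on its own...
arcpy.AnalyzeDatasets_management(sde, "NO_SYSTEM", data,
                                 "ANALYZE_BASE", "ANALYZE_DELTA", "ANALYZE_ARCHIVE")

# ...then register the fabric's feature dataset as versioned.
arcpy.RegisterAsVersioned_management(sde + r"\FABRIC.CountyFabric", "NO_EDITS_TO_BASE")
```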

 

After finally getting it registered, I began some rudimentary edit testing, mainly focusing on simple Parcel Fabric workflows like a parcel split. Immediately, at the point in the workflow where the Construction tool is activated, the cursor/crosshair exhibits a mind-numbing latency, turning to a pointer with an hourglass when moved through the map window (as if an intense operation were underway). If left alone (stopping mouse movement), the crosshair will return after 10-15 secs., but will turn back into an hourglass the instant you attempt to move it again. If you move the unresponsive pointer within snapping distance of a construction line feature and wait, it will behave as if snapped once the crosshair returns. From this point, if you SLOWLY move the cursor along the already-snapped feature, it will remain an active crosshair. If, however, you move too quickly, or move beyond the sticky tolerance of the snap environment (10 pixels), the crosshair becomes an hourglass once again. If you're patient enough to actually begin constructing a line feature, the same behavior continues the entire time you add vertices and snap to corresponding line features. When finished, you can Build the constructed features as you normally would.

 

After additional testing, I discovered that if I turn on the 'Classic Snapping' environment, which lets you control which layers and feature types in your TOC are available for snapping (as opposed to the newer default snapping environment, in which all layers are snap-enabled all the time), and turn OFF snapping for all layers, the performance issue goes away. At that point the Fabric still enforces snapping to itself (which is necessary), even when all layers are set not to snap. The moment you toggle snapping back on for the Lines FC, however, performance screeches to a halt.

 

Interestingly, this behavior does NOT occur in a FGDB. This leads me to infer that there is some unique interaction between ArcMap, SDE, and the Oracle database when it comes to the data-intensive Lines FC. Despite ESRI having provided several additional ways of directly (via SQL) re-analyzing the Fabric tables, regenerating spatial indexes, etc., no improvement has been achieved, nor has anyone identified (via SDE Intercept and Oracle trace files) any apparent bottleneck. This is NOT an I/O problem, nor does it appear to be a SQL processing issue.
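
For reference, the arcpy-side equivalent of regenerating a spatial index looks roughly like this (the path is a placeholder, and for ST_GEOMETRY in Oracle the index is an R-tree, so the grid parameter is largely moot):

```python
import arcpy

# Placeholder path to the fabric Lines feature class.
lines_fc = r"C:\Connections\dev_oracle.sde\FABRIC.CountyFabric_Lines"

# Drop and recreate the spatial index; a grid size of 0 lets the software
# choose rather than reusing whatever was built during the initial load.
arcpy.RemoveSpatialIndex_management(lines_fc)
arcpy.AddSpatialIndex_management(lines_fc, 0)
```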

 

I return to my concern over the sheer number of records in the Lines feature class, but have no way of substantiating how/why/if this is a valid concern. ArcMap is quite clearly choking when it interacts with the Lines data in real time, but there's no functional explanation I can offer to back this up beyond my observations. The FGDB is clearly creating its own brand of spatial indexes and the like on this large feature class, yet it runs smoothly just the same. Oracle, in contrast, should be even more robust, essentially shrugging at a table with a mere 3 million records; yet the performance in this case is abysmal. Thus, I keep coming back to something that the application is doing, and not simply the database itself.
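
If it's useful to anyone, a crude way to quantify the FGDB-versus-Oracle difference outside of ArcMap is to time a full geometry read of the Lines class from each workspace. The paths below are placeholders:

```python
import time
import arcpy

# Placeholder paths -- substitute your own FGDB and SDE connection.
sources = {
    "FGDB":   r"C:\Fabric\CountyFabric.gdb\CountyFabric_Lines",
    "Oracle": r"C:\Connections\dev_oracle.sde\FABRIC.CountyFabric_Lines",
}

for name, fc in sources.items():
    start = time.time()
    count = 0
    # Pull every line geometry once -- roughly the work ArcMap repeats when
    # it draws or snaps against the Lines feature class.
    with arcpy.da.SearchCursor(fc, ["SHAPE@"]) as cursor:
        for row in cursor:
            count += 1
    print("{}: {} lines read in {:.1f} s".format(name, count, time.time() - start))
```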

 

If anyone has any similar experiences with, or insight into, this issue, we’d appreciate it greatly!

 

Gavin

14 Replies
anna_garrett
Occasional Contributor III

I've had some similar latency problems with editing and found that the feature extent and the spatial index were out of sync upon creation of the parcel fabric. I manually recalculated them from ArcCatalog > Feature Class Properties menu for the Lines feature class. Running tools to automatically recalculate the indexes didn't seem to do anything to fix the problem. Not sure if this will help you at all as I don't know anything about Oracle databases.
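
For anyone who wants to script it instead of clicking through ArcCatalog, the extent recalculation is a one-liner (the path below is just an example):

```python
import arcpy

# Example path -- this is the same operation as Feature Class Properties >
# Feature Extent > Recalculate in ArcCatalog.
lines_fc = r"C:\Connections\dev_oracle.sde\FABRIC.CountyFabric_Lines"

arcpy.RecalculateFeatureClassExtent_management(lines_fc)
```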

edit: my parcel fabric contains ~1.4 million lines, just a consideration.

GavinMcDade
New Contributor III

Thanks, Anna. We did notice a change in our feature extents, but only with the Parcels and Lines feature classes themselves. Because the extents discrepancy only came to light after having begun edit testing, I unfortunately can't verify if the difference in extents occurred upon loading (and enabling the LGIM) in SDE, or if it was introduced by virtue of editing. In any case, we did recalculate the feature extents - but, to no avail. 

Interestingly, we've also seen references to similar behavior from users who claim the only solution is to change the geometry storage type from ST_GEOMETRY to SDEBINARY. It seems ridiculous to me that we'd have to revert to a deprecated storage type as a solution. I haven't tried this yet, but it's the only other "fix" I've seen mentioned.
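
If we do end up testing it, my understanding is that the storage type is driven by the configuration keyword's GEOMETRY_STORAGE parameter, so a side-by-side test could be as simple as copying the Lines class out under a binary keyword. The keyword name below is hypothetical and would have to be defined in DBTUNE first:

```python
import arcpy

# Hypothetical: "BINARY_TEST" is assumed to exist in the geodatabase's DBTUNE
# table with GEOMETRY_STORAGE set to SDEBINARY.
sde = r"C:\Connections\dev_oracle.sde"
src_lines = sde + r"\FABRIC.CountyFabric_Lines"

# Copy the Lines class out under the binary keyword purely as a test bed for
# comparing snap/draw behavior against the ST_GEOMETRY original.
arcpy.FeatureClassToFeatureClass_conversion(src_lines, sde, "Lines_sdebinary_test",
                                            config_keyword="BINARY_TEST")
```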

anna_garrett
Occasional Contributor III

I did change the geometry storage type to SDEBINARY after a ton of googling for other solutions. I wish there was a better solution.

GavinMcDade
New Contributor III

Are you saying that the change of storage type IS what fixed your problems, then? 

anna_garrett
Occasional Contributor III

A combination of both, I think? I ran into those issues last year, and I recall working through both solutions; it has been running fine ever since.

GavinMcDade
New Contributor III

Understood. I guess I'll give it a try. If it works, ESRI has some 'splainin' to do. 

Thanks! 

AmirBar-Maor
Esri Regular Contributor

Performance is more than one thing: drawing speed at different scales, editing operations, selection by attributes, selection by location (spatial queries), versioning...

It is also affected by: network, client, server, software version, DBMS type, DBMS version and patches, geometry type, map configuration (scale dependency, labeling, projection on the fly), number of connected users, GDB maintenance schedule, and more...

In general, the drawing performance of a parcel fabric class should be identical to that of a simple feature class. That means that if you export a parcel class to a simple feature class, they should draw at the same speed.
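
As a sanity check, that comparison can be set up by exporting the fabric's Lines class to a plain feature class in the same geodatabase and adding both to the same map. The paths and names below are only examples:

```python
import arcpy

# Example paths -- the fabric Lines class and a plain copy in the same GDB.
sde = r"C:\Connections\dev_oracle.sde"
fabric_lines = sde + r"\FABRIC.CountyFabric_Lines"

# Export to a simple feature class, then compare draw times for the two
# layers at the same scale; they should be essentially identical.
arcpy.FeatureClassToFeatureClass_conversion(fabric_lines, sde, "Lines_simple_copy")
```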

To figure out what is causing the bad performance, it is best to contact a technical support analyst.

In general:

  1. Make sure the DBMS software is up to date and matches the ArcGIS software version
  2. If the performance changes over the course of a day, it could indicate inadequate resources (network or hardware)
  3. Follow GDB best practices (compress, version management) - see the Land Records Meetup session about it, and the sketch after this list
  4. Make sure your spatial indices are correct
  5. Make sure you don't have unnecessary relationship classes (old LGIM)
  6. Make sure your map is configured correctly (definition queries, scale dependency, ...)
  7. ...
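
A minimal sketch of the routine maintenance in points 3 and 4, assuming an owner/admin connection and illustrative dataset names:

```python
import arcpy

# Assumed owner/admin connection file and illustrative dataset names.
sde = r"C:\Connections\dev_oracle_owner.sde"
arcpy.env.workspace = sde

data = (arcpy.ListFeatureClasses("*CountyFabric*") +
        arcpy.ListTables("*CountyFabric*"))

# Compress the versioned geodatabase, then rebuild indexes and refresh
# statistics so the DBMS optimizer sees the post-compress state.
arcpy.Compress_management(sde)
arcpy.RebuildIndexes_management(sde, "NO_SYSTEM", data, "ALL")
arcpy.AnalyzeDatasets_management(sde, "NO_SYSTEM", data,
                                 "ANALYZE_BASE", "ANALYZE_DELTA", "ANALYZE_ARCHIVE")
```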

YOU SHOULD NOT COMPROMISE ON HAVING GOOD PERFORMANCE!

Amir

RaymondCrew
New Contributor II

Was your list cut off or is "7. ..." just a way you are highlighting the closing all caps statement? 

AmirBar-Maor
Esri Regular Contributor

Hi Raymond,

"7. ..." means that there are many more factors that come to play when we deal with performance and those can be very different from one organization to the other.

The highlighted all caps is for emphasis. We have customers with hundreds of millions of features in the same table that get good performance, so if you are not satisfied with yours, you should fix it. Many times, installing the DBMS on another (even inferior) machine, loading the data, and testing the same operations can help benchmark the issue. There is no "one size fits all", and many times technical support will have recommendations for a specific combination of DBMS release and geodatabase release.

Amir
