Caching in 10.1 produces inconsistent results

07-14-2012 02:51 AM
EyadHammad
New Contributor
Hello,

I have published a number of MXD files as map services. When attempting to cache them, we get different results on different trials. The number of completed tiles keeps changing with every new attempt, so I am not confident in any result I get. Sometimes I get zero tiles (0% complete) and other times I get 400%, and the real shock is that sometimes I get a cache failure message.

I would assume the caching procedure is well defined and that the outcome is constant given the same set of inputs. However, I keep providing the same parameters yet get different results.

I also noticed that when I cache multiple scales at one time, something usually goes wrong.

When I cache one scale at a time, the tool is more stable,

but the results still vary.

Has anyone else experienced this issue?

Is there any detailed literature explaining exactly what happens when a caching job is submitted?

Is there any verification or QC method to double-check our work?

I noticed there is a Fix Errors command in the cache status wizard. Does anyone know exactly what this operation fixes? And once we run it, it completes, and the job status is updated, why does it not update the number of completed tiles?
by Anonymous User
Not applicable
Hi Eyad,

How many instances are you caching with, and can you describe your configuration a little more? (e.g., how many machines, CPU, RAM, and whether the cache directories are local or UNC paths to a SAN.)


Thanks,
Andrew
EyadHammad
New Contributor
Hello there,

Thank you for your reply.

Those were valid points. I moved everything to a new, more powerful server, and the number of completed tiles became stable across attempts.

Caching went fine, though some errors were encountered that I am looking into at the moment.

However, I must say that the overall caching experience is not satisfactory, especially when you dig down into the larger scales for basemaps.

Our adopted scale scheme is:

1128
2256
4513
9027
18055
36111
72223
144447
288895
577790
1155581
2311162
4622324
9244648
18489297

Issues we are facing:

1- No informative explanation of why caching fails. The reported error messages could be a little more specific about the cause than just "failed to cache at extent XXX".

2- The cache status GUI in both ArcCatalog and the Server Admin page keeps displaying that tile generation is in progress, even though the cache has already reported itself as failed in ArcCatalog and despite multiple attempts to cancel the cache job. I have yet to find a way to make ArcGIS Server understand that the caching process was terminated or failed. I would love to have a tool to force-kill an indefinitely running caching job. The only way I can think of is to manually update the status.gdb geodatabase, which is what I am thinking of trying out.

3- Cache import/export operations take a considerable amount of time to execute, even for small subsets, and even for a whole cache, which should be equivalent to a copy/paste of the physical files.

4- The cache update status tool takes a considerable amount of time to execute and update the cache status GDB. Also, you cannot specify which level to update; you have to update all of them, which is rarely what is needed.

5- We still need a mechanism to double-check that the cache completed. Since we use control shapes to govern what is cached at each scale level, the system-generated "expected number of tiles" is bound to differ from the completed tiles, as the latter should be lower. However, how much lower it should be, what the number ought to be, and how to check whether the cache actually completed as intended are still unclear, short of manually going over every inch of the map to see whether an image is displayed. That is exhausting when caching basemaps of a whole country.

6- Even in the system-generated expected number of tiles, at the lowest (largest-scale) level, for example 1:1000, we would logically expect the largest number of tiles, as was the case when caching the previous levels; however, the system expects it to be lower than at the previous scale level. This adds ambiguity and uncertainty to the results.


If you have insight into any of these issues or a related one, I would be extremely thankful.

Regards,
ArtUllman
New Contributor
We are seeing similar inconsistencies. We are generating caches for a national-scale mosaic dataset of 1 m imagery. We ended up with holes. When we tried to fix the holes by supplying an AOI polygon, it did not always work. We switched over to using an AOI envelope. We had better results with the envelope, but it seemed to generate tiles for areas much larger than the envelope we supplied. This is problematic for national-scale imagery. We kicked off a very small envelope yesterday, and it is still running.
by Anonymous User
Not applicable
We switched over to using an AOI envelope. We had better results with the envelope, but it seemed to generate tiles for areas much larger than the envelope we supplied. This is problematic for national-scale imagery. We kicked off a very small envelope yesterday, and it is still running.


This is "as designed". The AOI envelope option generates cache tiles for the extent covering the minimum bounding rectangle of the AOI, rather than only the supertiles that intersect the polygon.
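If it helps, here is a rough arcpy sketch for previewing the rectangle the envelope option will cover before kicking off a rebuild. This is only an illustration: the paths are placeholders, and the ENVELOPE/ALL combination is just one way to build the preview rectangle.

# Sketch: preview the minimum bounding rectangle (MBR) that the
# "AOI envelope" option will cover. Paths below are placeholders.
import arcpy

aoi_fc = r"C:\data\cache_control.gdb\aoi_polygons"   # hypothetical AOI feature class
mbr_fc = r"C:\data\cache_control.gdb\aoi_envelope"   # hypothetical output MBR feature class

# ENVELOPE + ALL produces a single rectangle covering every AOI feature,
# which approximates the area an envelope-based cache update will touch.
arcpy.MinimumBoundingGeometry_management(aoi_fc, mbr_fc, "ENVELOPE", "ALL")

# Print the resulting extent so it can be sanity-checked against the map.
ext = arcpy.Describe(mbr_fc).extent
print("Envelope to be cached: {0}, {1} -- {2}, {3}".format(
    ext.XMin, ext.YMin, ext.XMax, ext.YMax))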

Would you be willing to share your data with us for testing?

Thank you,
Andrew
by Anonymous User
Not applicable
1- No informative explanation of why caching fails. The reported error messages could be a little more specific about the cause than just "failed to cache at extent XXX".


There are several reasons why caching an extent can fail. In ArcGIS Server 10.1 we have tried to meet this requirement by adding more detail on the cause of failure for generic cases such as data access issues, non-availability of storage space, etc.

However, in sporadic cases we have observed Maplex and labeling issues that cause failures for which a specific cause cannot be reported.
We recommend checking the memory usage on your machine while the cache job is in progress. If available memory is running low, switch to a machine with more memory or reduce the number of instances dedicated to the "System > Caching" GP service.

If the above doesn't resolve the problem for you, we would like you to share your data with us so that we can address it in an upcoming service pack.
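If you prefer to script the instance change rather than use Manager, something along these lines should work against the ArcGIS Server Admin REST API. Treat it as a sketch only: the server URL and credentials are placeholders, and the caching GP service is assumed here to be System/CachingTools.GPServer, so confirm the exact service name in your Admin directory first.

# Sketch: lower the instance count of the caching GP service via the Admin API.
# Python 2.7 / urllib2; server URL, credentials, and service name are placeholders.
import json
import urllib
import urllib2

ADMIN = "http://myserver:6080/arcgis/admin"   # hypothetical server URL

def post(url, params):
    # All Admin API endpoints used here accept POSTed form parameters.
    return json.loads(urllib2.urlopen(url, urllib.urlencode(params)).read())

# 1. Get an admin token.
token = post(ADMIN + "/generateToken",
             {"username": "siteadmin", "password": "***",
              "client": "requestip", "f": "json"})["token"]

# 2. Read the current service definition (service name is an assumption).
svc_url = ADMIN + "/services/System/CachingTools.GPServer"
svc = post(svc_url, {"f": "json", "token": token})

# 3. Reduce the instance count and push the edit back.
svc["maxInstancesPerNode"] = 2
result = post(svc_url + "/edit",
              {"service": json.dumps(svc), "f": "json", "token": token})
print(result)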

2- The cache status GUI in both ArcCatalog and the Server Admin page keeps displaying that tile generation is in progress, even though the cache has already reported itself as failed in ArcCatalog and despite multiple attempts to cancel the cache job. I have yet to find a way to make ArcGIS Server understand that the caching process was terminated or failed. I would love to have a tool to force-kill an indefinitely running caching job. The only way I can think of is to manually update the status.gdb geodatabase, which is what I am thinking of trying out.


Please use the ManageMapServerCacheStatus tool to update the Status.gdb file. We have addressed this issue in SP1.
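From a script, the call is roughly the following. This is a sketch: the GIS Servers connection path and service name are placeholders, and REBUILD_CACHE_STATUS is believed to be the mode that re-scans the tiles on disk; check the 10.1 tool help for the full parameter list.

# Sketch: rebuild Status.gdb for a cached service after a killed or failed job.
# The connection path and service name below are placeholders.
import arcpy

service = r"GIS Servers\arcgis on myserver (admin)\Roads.MapServer"

# REBUILD_CACHE_STATUS re-scans the tiles on disk and rewrites Status.gdb,
# which should clear a stale "in progress" state.
arcpy.ManageMapServerCacheStatus_server(service, "REBUILD_CACHE_STATUS")
print(arcpy.GetMessages())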

3- Cache import/export operations take a considerable amount of time to execute, even for small subsets, and even for a whole cache, which should be equivalent to a copy/paste of the physical files.


Can you provide us details on the following questions:
a. Are you using an AOI feature to restrict the caching GP tool to the small subsets you would like to import or export?
b. Have you increased the number of instances for the caching GP service located in the System folder?

The import & export tools are built to clip the cache to the feature set geometry. If you are using a complex geometry to import or export, it is recommended to simplify the geometry before running the cache tool.
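For example, a sketch of the simplify-first pattern is below. The paths and tolerance are placeholders; the idea is to pass the simplified feature class as the Area of Interest parameter when you run the import or export cache tool.

# Sketch: simplify a complex AOI before using it to clip an import/export.
# Paths and the tolerance value are placeholders.
import arcpy

aoi     = r"C:\data\cache_control.gdb\aoi_detailed"     # hypothetical detailed AOI
aoi_gen = r"C:\data\cache_control.gdb\aoi_simplified"   # hypothetical simplified output

# Reduce vertex count so the cache tool spends less time clipping to the AOI.
arcpy.SimplifyPolygon_cartography(aoi, aoi_gen, "POINT_REMOVE", "500 Meters")
print(arcpy.GetMessages())

# Then supply aoi_gen as the Area of Interest parameter of the
# Export Map Server Cache / Import Map Server Cache tool.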

4- The cache update status tool takes a considerable amount of time to execute and update the cache status GDB. Also, you cannot specify which level to update; you have to update all of them, which is rarely what is needed.


We are working on this requirement, and it will be addressed in an upcoming service pack.

5- We still need a mechanism to double-check that the cache completed. Since we use control shapes to govern what is cached at each scale level, the system-generated "expected number of tiles" is bound to differ from the completed tiles, as the latter should be lower. However, how much lower it should be, what the number ought to be, and how to check whether the cache actually completed as intended are still unclear, short of manually going over every inch of the map to see whether an image is displayed. That is exhausting when caching basemaps of a whole country.


As described above, we are working on this requirement and it will be addressed in an upcoming service pack. Meanwhile, you can also use the Status.gdb file to determine which areas are cached and which are not. This can be done by copying Status.gdb to a different location and looking at the job status table.
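A small inspection script along these lines can save opening ArcMap. This is a sketch: the copy path is a placeholder, and the table and field names inside Status.gdb are discovered at run time rather than assumed.

# Sketch: inspect a copy of Status.gdb to see what it reports as complete.
import arcpy

arcpy.env.workspace = r"C:\temp\Status_copy.gdb"   # placeholder: a copy of the service's Status.gdb

# List whatever tables and feature classes the status geodatabase contains.
for name in (arcpy.ListTables() or []) + (arcpy.ListFeatureClasses() or []):
    fields = [f.name for f in arcpy.ListFields(name)
              if f.type not in ("Geometry", "Blob", "Raster")]
    print(name + ": " + ", ".join(fields))
    # Dump the first few rows so completed/failed counts can be eyeballed.
    with arcpy.da.SearchCursor(name, fields) as rows:
        for i, row in enumerate(rows):
            print(row)
            if i >= 4:
                break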

6- Even in the system-generated expected number of tiles, at the lowest (largest-scale) level, for example 1:1000, we would logically expect the largest number of tiles, as was the case when caching the previous levels; however, the system expects it to be lower than at the previous scale level. This adds ambiguity and uncertainty to the results.


This could be a bug; can you open an incident with Esri Support Services and share your data with us?
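As a sanity check in the meantime, a back-of-the-envelope tile count per level can be computed as below. This is not how the caching tools themselves count tiles (they work in supertiles and honor AOI shapes); it is only rough arithmetic for a full rectangular extent with 256-pixel tiles at 96 DPI, and the extent values are placeholders.

# Sketch: rough expected tile counts per scale for a full rectangular extent.
import math

TILE_PX, DPI = 256, 96
METERS_PER_INCH = 0.0254

# Placeholder extent in meters (xmin, ymin, xmax, ymax).
xmin, ymin, xmax, ymax = 200000.0, 3000000.0, 1400000.0, 4200000.0

for scale in [18055, 9027, 4513, 2256, 1128]:
    # Ground size of one tile = pixels * meters-per-pixel at this scale.
    tile_m = TILE_PX * scale * METERS_PER_INCH / DPI
    cols = int(math.ceil((xmax - xmin) / tile_m))
    rows = int(math.ceil((ymax - ymin) / tile_m))
    print("1:{0}  ~{1} tiles ({2} x {3})".format(scale, cols * rows, cols, rows))

Each halving of the scale denominator should roughly quadruple the count for a full-extent cache, so a smaller expected number at a larger scale is indeed suspicious.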


Cheers,
Andrew
ArtUllman
New Contributor
This is "as designed". The AOI envelope option generates cache tiles for the extent covering the minimum bounding rectangle of the AOI, rather than only the supertiles that intersect the polygon.

Would you be willing to share your data with us for testing?

Thank you,
Andrew


Thanks Andrew. I was expecting the caching process to use the MBR, but it appeared to be rebuilding tiles well outside the MBR of the feature class. I was able to figure out the problem. I originally created a feature class and then modified its features several times before using it as the envelope for rebuilding the cache tiles. After modifying the feature class, I did not go back into ArcCatalog and recalculate its extent. As a result, I had one small feature in the feature class, but the feature class itself had a very large extent based on features that no longer existed. Recalculating the extent should fix the problem. Thank you for your assistance.
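For anyone hitting the same thing, a quick way to spot a stale extent before caching is to compare the stored feature class extent with the extent the features actually cover. The path below is a placeholder.

# Sketch: compare a feature class's stored extent with the extent actually
# covered by its features, to catch the stale-extent problem described above.
import arcpy

fc = r"C:\data\cache_control.gdb\update_envelope"   # hypothetical AOI/envelope feature class

stored = arcpy.Describe(fc).extent

xmin = ymin = float("inf")
xmax = ymax = float("-inf")
with arcpy.da.SearchCursor(fc, ["SHAPE@"]) as rows:
    for (shape,) in rows:
        e = shape.extent
        xmin, ymin = min(xmin, e.XMin), min(ymin, e.YMin)
        xmax, ymax = max(xmax, e.XMax), max(ymax, e.YMax)

print("Stored extent : {0} {1} {2} {3}".format(stored.XMin, stored.YMin,
                                               stored.XMax, stored.YMax))
print("Feature extent: {0} {1} {2} {3}".format(xmin, ymin, xmax, ymax))
# If the stored extent is much larger, recalculate the feature class extent
# (Feature Class Properties in ArcCatalog) before using it as an AOI/envelope.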
NohyunMyung
New Contributor
I'm also seeing incredibly inconsistent results.

I'm building 2 caches at a national scale and I'm running into the following issues.

1) The cache status window tells me that tile generation is in progress, but no new tiles are generated after a refresh. The completed tile count stays the same, the percentage stays the same, and the In Progress column indicates it is still building. I've waited overnight and let my site churn away, only to come back in the morning and find nothing new. The job status says the job is done, with 100% completion and no errors, but those tiles still aren't shown as completed in the cache status.

2) The cache process seems to skip or give up on certain scales.
I'll have scales that are only partially complete, and the cache process will simply move on to the next scale and never return to complete the missing tiles in the previous scales. So the cache status tells me they're only partially complete, but again the job says the cache is 100% complete with no errors. The services themselves have obvious missing tiles, with no data, as you zoom and pan around.

Does anyone have any advice, or has anyone been able to successfully build a national-scale cache with a significant amount of data? The cache I'm generating is a nationwide Navteq 2011 Q3 street dataset with basemaps, and I have clients waiting for this service.

My site consists of four 64-bit Windows Server 2008 machines, each with 4 cores / 8 logical CPUs @ 2.4 GHz (32 CPUs total in the site) and 18 GB of RAM. I don't think my hardware should be the issue.
GrahamSmith1
New Contributor II
My experience with 10.1 caching is similar to the other posts. The new tools are unstable and unreliable, with poor error reporting. Typical first release... It is frustrating that a relatively simple task that worked fine in 10.0 has become a convoluted web of trial and error just to get a basic cache built. I find I can build levels 1-5 when publishing the map service. When I try to build levels 6, 7, and 8, that is where the witchcraft of figuring out which approach doesn't break it begins, and it leaves me wondering what is going on. I found one drastic approach that seems to help: stop all the other map services and run only the one you want to cache. NOT very efficient. Build the initial levels, then add one at a time, and pray to the voodoo gods that it doesn't just mysteriously stop on its own. It shouldn't be this way! I have better uses for my time than debugging release software that is still in a pre-release state. Come on ESRI, get this fixed with SP1!!!!
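For what it's worth, the stop-everything-else workaround can at least be scripted against the ArcGIS Server Admin REST API rather than clicked through. This is a sketch only: Python 2.7 / urllib2, and the server URL, credentials, folder, and service names are placeholders.

# Sketch: stop every other map service so only the one being cached is running.
import json
import urllib
import urllib2

ADMIN  = "http://myserver:6080/arcgis/admin"   # hypothetical server URL
KEEP   = "Basemap.MapServer"                   # hypothetical: the service being cached
FOLDER = ""                                    # "" = root folder

def post(url, params):
    return json.loads(urllib2.urlopen(url, urllib.urlencode(params)).read())

token = post(ADMIN + "/generateToken",
             {"username": "siteadmin", "password": "***",
              "client": "requestip", "f": "json"})["token"]

folder_url = ADMIN + "/services" + ("/" + FOLDER if FOLDER else "")
listing = post(folder_url, {"f": "json", "token": token})

for svc in listing["services"]:
    full = "{0}.{1}".format(svc["serviceName"], svc["type"])
    if svc["type"] == "MapServer" and full != KEEP:
        print("Stopping " + full)
        post("{0}/{1}/stop".format(folder_url, full), {"f": "json", "token": token})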
RoyceSimpson
Occasional Contributor III
My experience with 10.1 caching is similar to the other posts. The new tools are unstable and unreliable, with poor error reporting. Typical first release... It is frustrating that a relatively simple task that worked fine in 10.0 has become a convoluted web of trial and error just to get a basic cache built. I find I can build levels 1-5 when publishing the map service. When I try to build levels 6, 7, and 8, that is where the witchcraft of figuring out which approach doesn't break it begins, and it leaves me wondering what is going on. I found one drastic approach that seems to help: stop all the other map services and run only the one you want to cache. NOT very efficient. Build the initial levels, then add one at a time, and pray to the voodoo gods that it doesn't just mysteriously stop on its own. It shouldn't be this way! I have better uses for my time than debugging release software that is still in a pre-release state. Come on ESRI, get this fixed with SP1!!!!


I've experienced this as well. The cache-building experience in 10.1 is very frustrating. If I cache only one scale at a time, it works fine. If I cache many scale levels, only a few get done. The only way to guarantee a full cache is to run each scale individually... which of course isn't really an option.
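One way to make the scale-by-scale approach less painful is to script the loop so each scale runs as its own job. This is a sketch: the connection path, service name, and scale list are placeholders, and the parameter names for ManageMapServerCacheTiles should be double-checked against the 10.1 tool help.

# Sketch: cache one scale at a time, waiting for each job to finish.
import arcpy

service = r"GIS Servers\arcgis on myserver (admin)\Streets.MapServer"   # placeholder
scales = [288895, 144447, 72223, 36111, 18055]                          # placeholder list

for scale in scales:
    print("Caching 1:{0} ...".format(scale))
    # RECREATE_EMPTY_TILES fills in missing tiles without redoing finished ones.
    arcpy.ManageMapServerCacheTiles_server(
        service, [scale], "RECREATE_EMPTY_TILES",
        wait_for_job_completion="WAIT")
    print(arcpy.GetMessages())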