Tuning map services on 2 memory-hungry servers

08-20-2015 09:06 AM
ChadKopplin
Occasional Contributor III

This week I noticed that our 2 GIS servers, each with 8 GB of RAM, were running at 91% RAM usage. I saw that there was always a SOC instance open even when a map service was not in use, and that is where the issue begins. We have 56 map services per server, and each open SOC uses anywhere from 80 to 120 MB. We were spending just over 7 GB of RAM simply keeping a SOC available for every map service, which left little or no headroom for actual processing.

I needed to tune the map services so that no SOCs stay open once the client is finished using the service. Here are the steps for adjusting the instances on a map service.

1.  Open ArcCatalog

2.  Make an administrative GIS server connection

     a.  Click GIS Servers

     b.  Click Add GIS Server

     c.  Select Administer GIS server and click Next

     d.  Next add your server (i.e. http://<my_gis_server>:6080/arcgis)

     e.  For Authentication, I use my ArcGIS server account

     f.  Click Finish

3.  In the list of servers double click on your administrative connection to connect to the server

4.  Right-click the service whose available instances you want to change

5.  Select Service properties

6.  On the left side of the Service Editor dialog, click Pooling

7.  Under Specify Number of Instances, change the minimum number of instances per machine to 0

8.  Click on OK

With this setting, no instances stay open once the map service is idle. The server will create a SOC automatically when a client starts using the service again.
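The same change can also be scripted against the ArcGIS Server Admin REST API instead of clicking through ArcCatalog. Below is a sketch; the server URL, credentials, and service path are placeholders, and the live calls are wrapped in a function that you would invoke yourself:

```python
# Sketch: set a map service's minimum instances to 0 through the
# ArcGIS Server Admin REST API. The server URL, credentials, and
# service path below are placeholders -- substitute your own.
import json
import urllib.parse
import urllib.request

SERVER = "http://my_gis_server:6080/arcgis/admin"   # placeholder
SERVICE = "MyFolder/MyService.MapServer"            # placeholder


def set_min_instances(service_json, minimum=0):
    """Return a copy of a service definition with the pooling
    minimum changed; kept pure so it can be tested offline."""
    updated = dict(service_json)
    updated["minInstancesPerNode"] = minimum
    return updated


def post(url, params):
    """POST form-encoded params and decode the JSON response."""
    data = urllib.parse.urlencode(params).encode()
    with urllib.request.urlopen(url, data) as resp:
        return json.loads(resp.read())


def apply_change(username, password):
    """Fetch the service definition, drop the minimum to 0, save it."""
    token = post(SERVER + "/generateToken",
                 {"username": username, "password": password,
                  "client": "requestip", "f": "json"})["token"]
    svc = post(SERVER + "/services/" + SERVICE,
               {"token": token, "f": "json"})
    return post(SERVER + "/services/" + SERVICE + "/edit",
                {"service": json.dumps(set_min_instances(svc, 0)),
                 "token": token, "f": "json"})

# apply_change("admin", "secret") would run against a live server.
```

Looping `apply_change` over the service list returned by the `/services` resource would apply the change site-wide, which is handy when you have 56 services per machine.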

I set the minimum instances for all of my map services to 0 and will monitor usage to see if I should increase it on a case-by-case basis. I have noticed that the map services are responding more quickly. I am testing whether the better speed comes from the extra RAM or whether there might be a bug in how an existing instance is reused. I will let you all know what I find out.

20 Comments
CassidyKillian
Occasional Contributor II

Thanks Chad.  I need to look into this as we were experiencing a super heavy workload on our Server.  I noticed that there was an unusually high number of SOC instances running in task manager.  I will switch the minimum to 0 and see if that helps.

Keep us posted on your results as well.

nicogis
MVP Frequent Contributor

You can try using low isolation; it has its advantages, even though the Esri documentation strongly recommends high isolation.

Low isolation improves RAM consumption, but:

- pool shrinking cannot be optimized as well

- it is generally slightly less performant than high-isolation services

- if one instance fails, the other instances in the same process go down with it

I advise using the Capacity Planning Tool for sizing cores/RAM/services:

http://wiki.gis.com/wiki/index.php/Capacity_Planning_Tool_updates

Useful Esri links: System Monitor 1.1.6, System Log Parser, and System Test

MichaelVolz
Esteemed Contributor

What version of ArcGIS Server do you have installed?

ChadKopplin
Occasional Contributor III

We are currently running 10.2.2 on our server.

MichaelVolz
Esteemed Contributor

Do you have a means of tracking the usage of your AGS services?

I ask because ESRI has now provided out-of-the-box (OOTB) usage tracking in ArcGIS Manager that is available starting at v10.3.0.  In addition, I believe ESRI has made ArcGIS Server more efficient so it may run better in 10.3.x if you can upgrade at this time.

ChadKopplin
Occasional Contributor III

Thank you for the information. Yes, we have a means of tracking usage. For our servers we wanted the SOC instances to close when the client is finished using the service, to free up resources for other services. Currently we have ~55 map services, and with the default settings at publish time we were using 3.5 to 7 GB of RAM just to keep SOCs open and ready. We saw our servers' throughput and response times slow down, so this fix lets us use RAM more efficiently. We are also working on other configuration and tuning settings specific to each service to improve responsiveness on a case-by-case basis. Thank you for your suggestion; this spring I hope to upgrade the server to 10.3.x.

MichaelVolz
Esteemed Contributor

v10.4 is supposed to be available in the 1st quarter of 2016, so you might want to jump straight to v10.4 to potentially get even better functionality and efficiency in the AGS environment.

v10.2.2 has a bug where it does not delete temporary bundle files if a caching operation crashes, which can cause misleading error messages as the temporary cache bundle files build up and exhaust storage needed by operating system processes. Not sure if this applies to you, but I thought I would pass it on anyway. You can find information about this bug in the forums if you perform a search. This has been resolved at v10.3.x as well.

ThomasColson
MVP Frequent Contributor

I've found that by setting initial pooling to 0, the first user's connection attempt fails much more frequently than if 1 process is kept spawned. This is especially true if the data, the server, and the client are not on the same LAN backbone, and plenty of Wireshark logging confirmed it. The Performance Analysis of Logs (PAL) tool suggested that faster disks on both the database server and the ArcGIS Server might improve things a bit. I also set pool recycling to start at around 4 AM and stagger every 5 minutes or so, so the recycling occurs just before folks get to work. And finally, I've set the IIS app pool recycling for the Web Adaptor to fire every 12 hours instead of the default 29. It really hasn't done much (asking for faster hard disks was about as well received as asking for a diamond-studded work phone), but probably a 5% improvement. The thing about the Esri capacity planning tool is that the authors assume GIS system administrators have bottomless budgets. With many, if not all, production GIS servers ending up in cloud service providers or remote data centers, where the price of 12 GB of RAM suddenly includes two years of salary and the parking garage fees of the guy installing it, I'd love to see Esri enable some more tweaking of Tomcat memory management. A single SOC shouldn't lock up 170 MB of RAM when it isn't being used.

VinceAngelo
Esri Esteemed Contributor

8 GB of RAM for 56 services isn't all that much, considering Windows consumes 3 GB for just the OS (it works out to just 45.7 MB per instance with two instances per service). "Server class" processing hosts should have at least 16 GB of RAM (my last database server purchase had 256 GB). The next home PC I buy will have 16 GB, so I'm due to review RAM pricing to see if 24 GB or 32 GB is the next "server" minimum. If your server can't afford to have all services at their maximum instance count without exhausting RAM, then some research into access frequency is needed to set proper instance maximums and RAM requirements.
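That arithmetic can be packaged as a quick back-of-the-envelope estimator (a sketch; the figures are the ones from this thread, not universal constants):

```python
# Back-of-the-envelope RAM budget for SOC instances (sketch).
# Figures mirror this thread: 8 GB host, ~3 GB for Windows,
# 56 services with 2 instances each, 80-120 MB per SOC.
def mb_per_instance(total_ram_gb, os_overhead_gb, services, instances_per_service):
    """RAM left per SOC instance, in MB."""
    free_mb = (total_ram_gb - os_overhead_gb) * 1024
    return free_mb / (services * instances_per_service)


def max_instances(total_ram_gb, os_overhead_gb, mb_per_soc):
    """How many SOCs of a given footprint fit in the remaining RAM."""
    free_mb = (total_ram_gb - os_overhead_gb) * 1024
    return int(free_mb // mb_per_soc)


print(round(mb_per_instance(8, 3, 56, 2), 1))  # 45.7 MB per instance
print(max_instances(8, 3, 120))                # 42 SOCs at 120 MB each
```

At 120 MB per SOC, only about 42 instances fit in the RAM left after the OS, well short of 112 (56 services x 2), which is why the thread's servers were pinned at 91% memory.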

The drawback to dropping the minimum instances to zero is increased latency on initial service access. It's great for infrequently accessed services (alternate basemaps and the like), but it may hurt response time for services that are accessed at an interval slightly longer than the maximum idle time (and it adds CPU load from the extra shutdown/startup churn for those services).

- V

nicogis
MVP Frequent Contributor

It can also help to reduce the maximum time an idle instance is kept running, which you can see under the Timeout settings in the Pooling tab for each service in Server Manager (the default is 600 sec). ArcSOCs that are no longer in use then terminate more quickly, and Windows has more resources available.
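For reference, the pooling and timeout settings discussed in this thread sit together in the service's JSON definition as exposed by the Admin REST API (the values shown here are illustrative; timeouts are in seconds):

```json
{
  "minInstancesPerNode": 0,
  "maxInstancesPerNode": 2,
  "maxWaitTime": 60,
  "maxIdleTime": 600,
  "maxUsageTime": 600
}
```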

JohnHaddad
Esri Contributor

Hi Chad, how have you been?

Pooling is a good way to reduce unnecessary RAM usage by idle services. A drawback to setting the minimum instances to 0 is that the first person to hit that service will have to wait for it to start up (after it's gone completely idle). To counter that, you can dial up the maximum time an idle instance can be kept running (although the default is pretty generous - 30 mins).

With limited RAM, it's also a good idea to take a close look at the way that your services have been authored. Wherever possible, your dynamic map services should contain only the layers necessary for interactivity (queries, visualization, etc). Those services can then be used on top of cached basemaps, which use very little RAM (none if you're using online basemaps). If you publish 'all-in-one' map services with all layers being drawn dynamically, your server will use more RAM than is necessary.

John

ChadKopplin
Occasional Contributor III

Cassidy,

I have seen an improvement in our services as a whole, though there are certain services where I needed to keep the default settings. We manage them case by case. Thank you.

ChadKopplin
Occasional Contributor III

Thank you for your suggestion, I will look into it for our servers.

ChadKopplin
Occasional Contributor III

Thank you, John. I am seeing that with one of my services, which has 8 different stream and waterbody layers showing recreational use of streams and waterbodies across the state. They are all turned on at the same time at the state level (it does not open very quickly when zoomed out). Is there a way to use caches when zoomed out at the state and county levels, but render dynamically once I am zoomed in to something like 1:250,000?

JohnHaddad
Esri Contributor

Since streams and waterbodies don't change (relatively speaking) often, I would recommend caching them at all scales. This would take some time for the entire state, but unless there are significant changes to the underlying data, you shouldn't need to rebuild the cache anytime soon. Alternatively, you could prebuild cache tiles for the most popular areas of the state, and then set the service to build tiles on demand for the less browsed/visited areas.

If you don't want to do that, then you could still use the approach that you suggest. However, you would then need to manage two services instead of one.

Regardless of which approach you take, I would also look at configuring layer visibility at those smaller scales. If the data's attributes allow it, perhaps you could use definition queries to only draw major rivers and the largest water bodies at the smaller scales. Your map readers should only need to view minor streams and smaller waterbodies at the largest scales.
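As a sketch of that idea, a definition query on the small-scale copy of the layers might look like this (the field names here are hypothetical, not from Chad's data):

```sql
-- Hypothetical fields: keep only major features at state/county scales
STREAM_ORDER >= 5 OR WATERBODY_AREA_SQKM >= 10
```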

These kinds of authoring techniques benefit dynamic map services in terms of reducing your server's load, and they also reduce the time and resources needed to build and update cached map services. Here's a list of tips and best practices:

Tips and best practices for map caches—Documentation | ArcGIS Enterprise 

ChadKopplin
Occasional Contributor III

Thanks John this gives me more potential options for this particular service.  I will let you know what I find out.

DuarteCarreira
Occasional Contributor II

Hi all.

We have also had trouble managing AGS memory requirements (Win 2008 R2 x64, AGS 10.5). I can't imagine what it's like to have to manage with only 8 GB of memory...

What we ended up doing was consolidating the most used map services into a single map service. This reduced memory usage drastically. There are some obvious caveats to be aware of, but performance is still very good for our usage patterns, and map service crashes are not an issue.

A few more details... we have a few web apps that used dedicated map services. Each map service would spawn 2 arcsoc minimum. To reduce these, we created a new mxd with a grouplayer for each web app. Each web app would then filter layers to only those within its corresponding layer group.

We configured this consolidated map service as low isolation, with enough maximum instances so as to avoid high maximum response times. We monitor memory usage and map service response times (min, avg, max) and # of failed requests (it's 0).

With this setup we see a very stable memory usage. We reduced from 80% to 40% memory usage.

We have several other map services that we did not consolidate, but did configure as low isolation with varying number of max instances.

Overall we have 36 instances running consuming 6.5GB working set memory. This varies to a maximum 45 services and 8.7GB working set mem. The OS has 20GB ram installed, and shows 57% occupied in task manager.

by Anonymous User
Not applicable

Duarte, in general do you believe it would be faster for ONE viewer that has 600 layers in it (our main internal viewer for local government) to have one service with all the layers, or about 12 MXDs/services separated into groups (i.e. Infra, Transportation, Parcels, etc.) to break it up? I had thought separating them would speed up load time... Currently our main viewer is slow. All I know is when I train people and 10 people start clicking things, it grinds to a halt. RAM we have 96 GB.

All MXD services point to the SDE, which is versioned, and which hundreds of Desktop users also access. The viewers point to layers on the same SQL db, but through a separate 'Read Only' SDE that pulls the layers together for use in ArcMap and web viewers; other SDEs of the layers are for editing. Apparently, viewers should link to separate FGDBs, not SDE.

Also, all services were set to 1 as the minimum pool, so I put them to 0 for hundreds of seldom-used services to save RAM. I was reading the pool should be n+1 for n being the # of cores, but on Server Software Performance - GIS Wiki | The GIS Encyclopedia I read that was only for batch processes... it said 3 to 5 times the number of cores. So should I set my max pool to 15 for big services? I need to get MXDperfstat, the Capacity Planning Tool, and System Monitor or some IIS logger next.

DuarteCarreira
Occasional Contributor II

Sorry to only answer today... just noticed your reply now.

I wouldn't consolidate 600 layers. But testing is always best.

If you have 96 GB of RAM and that's not a limitation yet, then don't consolidate into a single mxd. Maybe study what an intermediate consolidation plan would look like. Hundreds of seldom-used services sounds like a nightmare. Could they be put into a single map service, or a few?

I use MXDperfstat to see if performance issues exist with a single user.

I don't have nearly the number of users you do. You can try to monitor mapservices statistics, like max wait time, # of failed requests, to see how things work under different configurations.

/blogs/clarity/2015/09/14/tools-to-monitor-your-servers-and-services 

About server statistics—ArcGIS Server Administration (Windows) | ArcGIS Enterprise 

To simulate 10 simultaneous users you can use a web stress tool and see how the server responds to increasing users. Esri has one; I never tried it, since it seems a bit overkill to me:

http://www.arcgis.com/home/item.html?id=e8bac3559fd64352b799b6adf5721d81 

There are lots out there... I've used JMeter in the past.

As you test your mapservices keep an eye on the server's CPU, memory and bandwidth usage. You'll get an idea of what your bottleneck is/are. Also be aware that your testing PC should not saturate CPU or bandwidth. If it does, divide your testing into several PCs.
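A minimal version of that kind of stress test can be written in a few lines of Python (a sketch, not a substitute for JMeter; the URL in the usage comment is a placeholder for one of your own endpoints):

```python
# Minimal concurrent-request stress sketch: fire N GET requests at a
# URL from a thread pool and report min/avg/max latency plus the
# response codes, so failed requests are visible.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor


def timed_get(url):
    """Fetch a URL once, returning (status code, elapsed seconds)."""
    start = time.time()
    with urllib.request.urlopen(url) as resp:
        resp.read()
        return resp.status, time.time() - start


def stress(url, requests=10, workers=10):
    """Issue `requests` GETs with `workers` threads; summarize timings."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(timed_get, [url] * requests))
    times = [t for _, t in results]
    return {"min": min(times), "avg": sum(times) / len(times),
            "max": max(times), "codes": [c for c, _ in results]}

# Example (placeholder URL):
# stress("http://my_gis_server:6080/arcgis/rest/services?f=json")
```

Ramping `requests` and `workers` up while watching the server's CPU, RAM, and the max latency gives a rough picture of where the bottleneck is, per Duarte's advice above.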

by Anonymous User
Not applicable

Yes thank you for mentioning this. Useful tips. I have not used Perfstat yet but plan to along with System Monitor and monitoring IIS. In theory you'd think publishing once, using everywhere would be most efficient. example: parcels.  We have it in most viewers. But why publish it in several dozen MXDs one for every viewer, when we could have the layer as one service, used across all viewers. And just give it 16 instances. Symbolize from the webmap or Portal. It would seem that's the direction Esri wants us to go. I will be testing, indeed, and reporting back... 

About the Author
I am from Miller, SD and have a BS degree in Wildlife and Fisheries Science from SDSU. I have been working with the Esri suite of software for about 29 years. I enjoy the outdoors and looking at a map and picturing myself there; when I look at a 2D topo map, I start getting a 3D view in my head. With GIS I have found something that I absolutely love doing with the computer. Currently, I am the GIS Manager for the Wyoming Department of Environmental Quality, and we are running ArcGIS Server Enterprise with ArcGIS 10.7.1 in a SQL 17 environment.