Difference between PostGIS Geometry and ArcGIS Geometry

02-20-2024 02:53 AM
SaurabhUpadhyaya
Occasional Contributor

Hi 

As per the project requirement, we are using a Postgres database with the PostGIS extension; the GIS vector data is stored directly in this database (no geodatabase). We connect to this database using a Database Connection and publish the services (Feature/Map Service).

Now we are facing a performance challenge; queries are sometimes taking a long time. We have applied a GiST index (on the shape field) and a B-tree index (on a non-ID field).

Can anyone suggest what we should do? We do not want to use a geodatabase because other Python services are integrated with this database.

George_Thompson
Esri Frequent Contributor

Is it all tables in PostgreSQL that are slow or only certain tables?

How many records?

Do the records cover a large geographic area?

What is the geometry of the table?

Can you provide the version of ArcGIS Enterprise & PostgreSQL being used?

--- George T.
SaurabhUpadhyaya
Occasional Contributor

Hi, 

We have 3 Polygon layers which are saved in Postgres DB Version PostgreSQL 13.8 on aarch64-unknown-linux-gnu, compiled by aarch64-unknown-linux-gnu-gcc (GCC) 7.4.0, 64-bit, POSTGIS="3.2.3 0" 

ArcGIS Server 11.1.

In these layers, one table has approximately 20 lakh (2 million) records, which are used via an ArcGIS Feature/Map Service.

The important point is that we are trying to improve the performance.

I used the indexing below:

CREATE INDEX IF NOT EXISTS index_user_id
ON sde.table_name (user_id);


CREATE INDEX IF NOT EXISTS geom_shape
ON sde.table_name USING gist(shape);

It takes 20-26 seconds in pgAdmin when filtering the records by user_id. In the Feature Layer we also applied a definition expression on the same user_id.
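Before tuning anything else, it is worth confirming that the planner actually uses the user_id B-tree index for that filter. A minimal check in pgAdmin could look like this (table and column names taken from the post above; the user_id value is a placeholder):

```sql
-- If the plan shows a sequential scan instead of an Index Scan / Bitmap
-- Index Scan on index_user_id, the index is not being used for this filter.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM sde.table_name
WHERE user_id = 12345;  -- placeholder value
```

The ANALYZE option runs the query for real, so the reported timings can be compared directly against the 20-26 seconds observed.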

 

Can you please suggest how to improve the performance?

Also, we need your help on which parameters to set on ArcGIS Server, like Dedicated Instance, Max Record Count, etc.

 

George_Thompson
Esri Frequent Contributor

I am not sure how to specifically make it faster, but I can say that if it takes 20-26 seconds in pgAdmin, then it will take around that long in a service as well. I would also expect 2,000,000 points to return faster than 20+ seconds, but that is without knowing all the complexities.

How long does it take to run a "select *" from the table with no filter / group by?

--- George T.
VinceAngelo
Esri Esteemed Contributor

I'm mostly using AWS RDS, where the Esri sde.st_geometry type isn't available. I haven't ever seen performance issues with PostGIS geometry types, though I've always built appropriate indexes. 

I have seen poor performance with millions of spatially fragmented features (at 1m it's noticeable, after 10m it starts to impact functionality), but spatial fragmentation is storage-agnostic, and impacts all storage implementations (shapefile, file geodatabase, Oracle,  SQL-Server, PostgreSQL, Informix, DB2,...).
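One storage-side mitigation for spatial fragmentation in PostgreSQL is to physically reorder the table along its spatial index. This is a sketch only, reusing the table and index names from the earlier post; note that CLUSTER takes an exclusive lock on the table while it runs:

```sql
-- Rewrite the table in GiST-index order so features that are close in
-- space are also close on disk, reducing random I/O on spatial queries.
CLUSTER sde.table_name USING geom_shape;

-- CLUSTER rewrites the table, so refresh planner statistics afterwards.
ANALYZE sde.table_name;
```

The clustering is a one-time rewrite, not maintained automatically; after heavy edits it would need to be re-run.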

More details would be necessary in order to distinguish whether the issue was with the storage format or the underlying data. Among the information needed:

  • Exact versions of PostgreSQL and PostGIS
  • The number of tables involved (both with and without geometry columns)
  • The number of features in the largest table
  • The geometry type (point/line/poly)
  • For line/polygon, the average number of vertices in each feature
  • The frequency of update on the features
  • The average number of features returned on the average query
  • A description of the performance problem using numbers (e.g., "It takes 14.7 seconds to return 40,123 features in GeoJSON format over a window that extends 17% of the total area") 

- V

SaurabhUpadhyaya
Occasional Contributor

Hi, 

We are using Postgres RDS for this purpose.

PostgreSQL 13.8 on aarch64-unknown-linux-gnu, compiled by aarch64-unknown-linux-gnu-gcc (GCC) 7.4.0, 64-bit

POSTGIS="3.2.3 0" 

ArcGIS Server 11.1.

Our application is developed with the ArcGIS JS API (version 4.28) using a Feature Layer. In this application we consume polygon layers (not from an enterprise geodatabase); we connect in ArcGIS Pro and publish the services.

In these layers, one table has approximately 20 lakh (2 million) records, used via an ArcGIS Feature/Map Service. Each request fetches records by user_id, approximately 1.60 lakh (160,000) at a time. It takes 20-26 seconds in pgAdmin when filtering the records by user_id. In the Feature Layer we also applied a definition expression on the same user_id.

I used the indexing below:

CREATE INDEX IF NOT EXISTS index_user_id
ON sde.table_name (user_id);


CREATE INDEX IF NOT EXISTS geom_shape
ON sde.table_name USING gist(shape);

 

Also, we need your help on which parameters to set on ArcGIS Server, like Dedicated Instance, Max Record Count, etc.

 

Can you please suggest how to improve the performance?

 

MarceloMarques
Esri Regular Contributor

@SaurabhUpadhyaya - see documentation below.

Tune services using best practices—ArcGIS Server | Documentation for ArcGIS Enterprise

| Marcelo Marques | Principal Product Engineer | Esri |
| Cloud & Database Administrator | OCP - Oracle Certified Professional |
I work with Enterprise Geodatabases since 1997.
"I do not fear computers. I fear the lack of them." - Isaac Asimov
VinceAngelo
Esri Esteemed Contributor

Any query returning over a hundred thousand rows of 2 million is always going to be slow, especially through a service that only returns 2000 records at a time (160,000 features at 2,000 per request is 80 round trips), in a format that's much, much larger than the raw data. This has nothing to do with PostgreSQL and everything to do with throughput.

- V

MarceloMarques
Esri Regular Contributor

@SaurabhUpadhyaya -

You need to provide a lot more details: the versions of ArcGIS Pro and ArcGIS Server, the versions of PostgreSQL and PostGIS, how you published the services using the PostgreSQL database (no geodatabase), how many layers were published together, the number of rows in each layer, etc.

But here are a few tips. 

1. Tune your map documents in ArcGIS Pro before you publish: add reference scales to render the layers and review the symbology. If attributes drive the symbology, make sure those columns are indexed; the same applies if you use definition queries and/or query layers.

2. Identify which map service is slow, add it to ArcGIS Pro, and trace the HTTPS requests to see exactly where it is slow. Is it slow because the number of ArcGIS Server SOC processes is low, or because of the PostgreSQL database?

3. If it is the PostgreSQL database, increase the number of connections in the PostgreSQL configuration file and check the database server's CPU, memory, disk, and network utilization. If those are normal, the next step is to run a database trace in PostgreSQL to capture the SQL statements issued by the slow map service, then analyze them to find which statement or statements are slow, and tune those to improve performance.

4. Perform regular PostgreSQL database maintenance, e.g. vacuum, gather new statistics, rebuild indexes.

5. Keep PostgreSQL and PostGIS updated with the latest patches.
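The maintenance in tip 4 can be sketched in SQL as follows (a minimal example using the table name from the earlier posts; run during a low-traffic window):

```sql
-- Reclaim dead tuples and refresh planner statistics in one pass.
VACUUM (ANALYZE) sde.table_name;

-- Rebuild bloated indexes; CONCURRENTLY (available since PostgreSQL 12)
-- avoids taking locks that would block concurrent reads and writes.
REINDEX TABLE CONCURRENTLY sde.table_name;
```

On RDS, autovacuum normally handles routine vacuuming, but a manual VACUUM (ANALYZE) after large bulk loads or deletes is still worthwhile.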

community.esri.com - How to Upgrade the PostgreSQL and PostGIS version for the Enterprise Geodatabase

I hope this helps.

SaurabhUpadhyaya
Occasional Contributor

Hi,

PostgreSQL 13.8 on aarch64-unknown-linux-gnu, compiled by aarch64-unknown-linux-gnu-gcc (GCC) 7.4.0, 64-bit

POSTGIS="3.2.3 0" 

ArcGIS Server 11.1 and ArcGIS Pro 3.1

1. We published a feature service in ArcGIS Server 11.1 and consume it in the ArcGIS JS API (4.28), rendering with a Feature Layer and a definition expression.

2. There are mainly 3 feature services, and each is published at a separate REST API URL using a related feature class.

Can you share how to improve the performance? We are just trying to load layers and do search and simple thematic operations. No editing.

 

 
