DOC
I'd like to thank Cameron Everhart, one of our technical consultants from Professional Services Delivery, for working on this particular challenge with me.

A customer was receiving data with a polygon specified as a collection of Longitude / Latitude coordinate values. The collection was received as a string, and they wanted to coerce that string into a Geometry. Using GeoEvent Server Field Calculator processors to evaluate a series of nested replaceAll( ) functions, we were able to do just that. The string manipulation made possible by the regular expression pattern matching supported by the replaceAll( ) function is incredibly powerful.

We start with the following input. Note that each coordinate value is separated by a single space and each coordinate pair is separated by a <comma><space> character sequence:

POLYGON ((-114.125 33.375, -116.125 32.375, -115.125 31.375, -113.125 31.375, -112.125 32.375))

Our solution uses regular expressions to match patterns in the input string and three Field Calculator processors, configured with replaceAll( ) expressions, to manipulate the input string. Note that we are using the "regular" Field Calculator processor, not the Field Calculator (Regular Expression) version of that processor.

Our goal is to transform the string illustrated above into an Esri Feature JSON string representation of a polygon geometry. You will want to review the Common Data Types > Geometry Objects topic in the ArcGIS developer help to understand how polygons can be represented as a JSON string. You will also want to review the "String functions for Field Calculator Processor" portion of the help topic for the GeoEvent Server's Field Calculator processor.

When represented as Esri Feature JSON strings, polygons specify coordinate values within a structure of nested arrays. One thing we need to do is identify and replace all of the <comma><space> character sequences in our input string with a literal ],[ character sequence. We can do this with the following regular expression pattern match and literal string replacement:

RegEx Pattern: ', '
Replacement: '],['

Incorporating this into a replaceAll( ) expression, we can configure a Field Calculator to evaluate the expression. The reference "polygon" in this expression identifies the attribute field holding the input string in the event record being processed:

replaceAll(replaceAll(polygon, ', ', '],['), 'POLYGON ', '')

Notice that the expression invokes the replaceAll( ) function twice. The result from the "inner" function call (replacing every literal <comma><space> with a literal ],[) is used as input to an "outer" function call which replaces the unwanted literal string POLYGON (with its trailing space) at the front of the string with an empty string. This first expression, with its nested calls to replaceAll( ), performs the following string manipulation:

-- Input --
POLYGON ((-114.125 33.375, -116.125 32.375, -115.125 31.375, -113.125 31.375, -112.125 32.375))
-- Output --
((-114.125 33.375],[-116.125 32.375],[-115.125 31.375],[-113.125 31.375],[-112.125 32.375))

Next, we take the output from this first expression and configure a second Field Calculator to replace the literal <space> between each pair of coordinates with a <comma>. The ArcGIS developer Feature JSON string specification for a polygon requires that the coordinates of each vertex be expressed as a comma-delimited pair of values (X,Y). The second Field Calculator expression replaceAll(polygon, ' ', ',') takes the input string illustrated below and produces the indicated output string:

-- Input --
((-114.125 33.375],[-116.125 32.375],[-115.125 31.375],[-113.125 31.375],[-112.125 32.375))
-- Output --
((-114.125,33.375],[-116.125,32.375],[-115.125,31.375],[-113.125,31.375],[-112.125,32.375))

A final pair of regular expression pattern matches can now be used to replace the (( at the front of the string and the )) at the end of the string with the required array bracketing and spatial reference specification. The first pattern match uses a ^ metacharacter to anchor the pattern match to the start of the input string; its target is the double parentheses at the beginning of the string. The second pattern match targets the double parentheses at the end of the input string; the $ metacharacter anchors that pattern match to the end of the string. Our final expression will appear more complicated at first, mostly because the literal replacement strings are a little longer. I'll try to pull the expression apart to make it easier to understand.

replaceAll(replaceAll(polygon, '^\(\(', '{"rings": [[['), '\)\)$', ']]], "spatialReference": {"wkid": 4326}}')

The inner replaceAll( ) has our first regular expression pattern:

replaceAll(polygon, '^\(\(', '{"rings": [[[')

Backslash characters are used to 'escape' the pair of parentheses in the pattern. This is required to specify that the parentheses are literal rounded parentheses and not the start of what regular expressions refer to as a capturing group. This pattern and replacement adds the required array bracketing and "rings" specification to the string representation of the polygon.

The outer replaceAll( ) has our second regular expression pattern:

replaceAll( . . . '\)\)$', ']]], "spatialReference": {"wkid": 4326}}')

Backslash characters are again used to 'escape' the pair of parentheses in the pattern. The pattern and replacement in this case appends the required closing brackets for the polygon coordinate array and includes an appropriate spatial reference for the polygon's coordinates.

The expression nests an "inner" call to replaceAll( ) within a second "outer" invocation. The result from the "inner" function call, manipulating the beginning of the string, is used as input to the "outer" function call, which manipulates the end of the string. The expression could perhaps be simplified by using separate Field Calculator processors to handle each pattern and replacement, but the leading and trailing parentheses anchored to the beginning and end of the string seemed to beg for the replacements to be done serially. Here is the string manipulation being performed in this final step:

-- Input --
((-114.125,33.375],[-116.125,32.375],[-115.125,31.375],[-113.125,31.375],[-112.125,32.375))
-- Output --
{"rings": [[[-114.125,33.375],[-116.125,32.375],[-115.125,31.375],[-113.125,31.375],[-112.125,32.375]]], "spatialReference": {"wkid": 4326}}

We can now use a Field Mapper processor to cast this final String value into a Geometry. Here is an illustration of a GeoEvent Service with the three Field Calculator processors and a Field Mapper processor routing output to a stream service. The stream service was used to verify the produced string could be successfully cast to a Geometry and displayed as a polygon feature on a web map.

If you found this article helpful, you might want to check out these other threads, which highlight what you can do with the Field Calculator processor, the Field Mapper processor, and regular expression pattern matching:

How to switch positions on coordinates
GeoEvent 10.9: Using expressions in Field Mapper Processors
Posted: Wednesday

POST
@Jay_Gregory -- While this use case might not be a great fit for GeoEvent Server, the ArcGIS Online product for real-time data processing, ArcGIS Velocity, supports both real-time analytics and scheduled batch analytics. There is a Calculate Distance tool in the ArcGIS Velocity proximity toolset which can be configured to join a calculated distance onto processed data records in order to "enrich" them with the linear distance to the closest comparison feature. Maybe something to consider for the future ...
Posted: 01-09-2024 09:20 AM

POST
@Justin_Greco -- I was able to check with one of the developers and confirm that the original GeoTab implementation did not anticipate that an organization might have different administrative units or groups, each with their own GeoTab account. The latest release of the connector (Release 11 - July 5, 2023) does not support multiple 'databases' within the cached data. You could post a comment to the connector's page in the GeoEvent Server Gallery requesting an enhancement, but I cannot say when, or if, the enhancement would be picked up for development.
Posted: 01-09-2024 09:04 AM

POST
@Jay_Gregory -- I don't think this is going to be possible using the geometry processors available out of the box for GeoEvent Server. The Intersector processor, for example, would be a better fit if you had a number of inbound event records with polyline geometry and wanted to know what portion(s) of each polyline intersect the wildfire burn area (given a polygon for the wildfire's area). The Difference Creator processor, on the other hand, would be used to clip or remove some portion of a wildfire's burn area that intersects an event record's geometry -- not really useful if the inbound event record has a point or polyline geometry. I would expect you would want to run this sort of analysis as polygon vs. polygon. I cannot convince myself that the Symmetric Difference Creator processor will be of any help here.

Given that the wildfire locations are provided as point locations (not polyline perimeters or polygonal burn areas), and given the lack of a 'Find Nearest' tool, there is really no opportunity to perform a useful intersection analysis if the facility data is also point locations. A point can be evaluated to see if it intersects an area, but two points are very rarely (like "never") going to intersect one another at the exact same coordinates. You could create elliptic buffers for the wildfires, or the facilities, or both, I suppose. But especially for the wildfires, a geodesic buffer would be a very poor model of a fire's behavior and direction of expansion. There simply is no way to take terrain or weather into account when creating the buffer. And then you would have to run the analysis iteratively, using rings of buffers, to determine which facilities are within: (a) 2km of a fire; (b) 5km of a fire; (c) 10km of a fire? Seems like a pretty poor substitute for trying to find the nearest point to another point.

This also doesn't feel like a great fit for real-time data processing in general. If you were able to query every hour to get an updated polygon model for a wildfire's burn area, that would be one thing. But I wouldn't think a given wildfire's point location would be likely to change in real-time, and how many new wildfires are going to be posted in an hour, or even within an 8-hour shift? It just seems you would be better off bringing the two point layers into a web map and using the available 'Analysis' tools to run a 'Find Nearest' from the Proximity set of operations at several different times during your day.
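For reference, the point-to-point "find nearest" computation itself is simple enough to script outside GeoEvent Server. Here is a minimal Java sketch using a haversine great-circle approximation -- the fire and facility points are hypothetical, and this is not an out-of-the-box GeoEvent Server capability:

import java.util.List;

public class NearestFacility {
    record Point(String id, double lon, double lat) {}

    // Haversine distance in kilometers between two lon/lat points.
    static double distanceKm(Point a, Point b) {
        final double R = 6371.0; // mean Earth radius in km
        double dLat = Math.toRadians(b.lat() - a.lat());
        double dLon = Math.toRadians(b.lon() - a.lon());
        double h = Math.pow(Math.sin(dLat / 2), 2)
                 + Math.cos(Math.toRadians(a.lat())) * Math.cos(Math.toRadians(b.lat()))
                 * Math.pow(Math.sin(dLon / 2), 2);
        return 2 * R * Math.asin(Math.sqrt(h));
    }

    // Linear scan for the facility closest to one wildfire point.
    static Point nearest(Point fire, List<Point> facilities) {
        Point best = null;
        double bestKm = Double.MAX_VALUE;
        for (Point f : facilities) {
            double km = distanceKm(fire, f);
            if (km < bestKm) { bestKm = km; best = f; }
        }
        return best;
    }

    public static void main(String[] args) {
        Point fire = new Point("fire-1", -114.125, 33.375); // hypothetical sample data
        List<Point> facilities = List.of(
            new Point("facility-A", -114.2, 33.1),
            new Point("facility-B", -115.0, 32.9));
        System.out.println("Nearest: " + nearest(fire, facilities).id());
    }
}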
Posted: 01-08-2024 06:56 PM

POST
@Moi_Nccncc -- To expand on what @Amir-Sarrafzadeh-Arasi suggests, yes, you want to use geofences to capture the buffered areas around the point locations of the trucks from your AVL. You need to do this in two steps:

1. Use a Buffer Creator processor to construct an elliptical buffer around the last-known / latest-reported position of a truck. Allow the processor to replace the truck's point location (geometry) with the computed polygon. Then push the event record with its polygon geometry out as a feature record so that a Geofence Synchronization Rule can use the feature record to add / update a geofence in the GeoEvent Server's registry of known areas.

2. In a second GeoEvent Service, use a GeoTagger processor to identify which geofences (elliptical areas) a given truck's point location intersects. The name of each polygon geofence should be the TRACK_ID (e.g. vehicle name or identifier) of the truck whose location was used to create the buffered area.

Now, you have a few things to consider. First is the rate at which AVL data is coming into your GeoEvent Server. Using a feature service to store feature records which are queried to update geofences will introduce significant latency in getting the geofences updated. The alternative is to use a stream service to broadcast the feature records with their elliptical geometries. A Geofence Synchronization Rule can be configured to subscribe to a stream service so the data is "pushed" into the geofence registry rather than requiring the synchronization rule to "poll" or query the feature records from a geodatabase.

You also have to recognize the fundamental race condition (when receiving an updated AVL point location) between using that point location to update a polygon geofence and processing the point location to determine if it intersects any other geofences you've created / updated from other trucks' point locations. You have to accept that the last-known / latest-reported position of a truck can only be compared to established geofences, and allow some time for the geofence synchronization to make those updates to the geofence registry before trying to determine if the "latest" point location intersects a geofence.

How you choose to output the polygon buffer feature records matters. It will take some time to create an elliptical buffer, write the constructed geometry out as a feature record to a feature service, and for a synchronization rule to then retrieve that feature record and update the GeoEvent Server's geofence registry. If you use a feature service for your synchronization you must not set the synchronization rule's Refresh Interval too aggressively. You cannot, for example, expect GeoEvent Server's Geofence Synchronization Rule to query a feature service every second and update the geofence registry -- not when you are also expecting it to create and update those polygon feature records and to ingest, adapt, and process the AVL location data records to determine intersections with the geofences. The default for geofence synchronization using a feature service is to query the feature service once every 15 minutes. You might set the Refresh Interval to run as quickly as once a minute, but I would not set it to run any more frequently than that. I would probably use a stream service to drive the geofence synchronization to minimize the latency in updating the geofence registry.

A final consideration is that a vehicle's point location will most likely intersect that same vehicle's buffered location (e.g. geofence). The first GeoEvent Service is creating the polygon buffers and driving updates into the geofence registry. The second GeoEvent Service is receiving the same point locations and using the established geofences to determine intersections. You will probably want to use something like a Field Splitter processor to split the comma-delimited list of geofence names produced by your GeoTagger, and then a Filter to discard any event record where the geofence name matches the truck's own TRACK_ID (see the sketch below). You only want to keep event records where a truck's point location intersects some other truck's buffered location.
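To make that last filtering step concrete, here is a minimal Java sketch of the "split the geotags, drop the self-match" logic. The attribute names (trackId, geotags) and sample values are hypothetical; in practice this is configuration on a Field Splitter processor and a filter, not custom code:

import java.util.Arrays;
import java.util.List;

public class GeofenceSelfHitFilter {
    // Returns the geofence names that do NOT match the truck's own TRACK_ID.
    static List<String> otherTruckHits(String trackId, String geotags) {
        if (geotags == null || geotags.isBlank()) return List.of();
        return Arrays.stream(geotags.split(","))
                .map(String::trim)
                .filter(name -> !name.equals(trackId))
                .toList();
    }

    public static void main(String[] args) {
        // Truck T-17 intersects its own buffer plus truck T-22's buffer.
        System.out.println(otherTruckHits("T-17", "T-17, T-22"));  // [T-22] -> keep event
        System.out.println(otherTruckHits("T-17", "T-17"));        // []     -> discard event
    }
}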
Posted: 01-08-2024 04:01 PM

POST
@DanaDebele -- The fields Shape__Length (applicable for feature records with a polyline geometry) and Shape__Area (applicable for feature records with a polygon geometry) contain geodatabase-managed attribute values. Their values are computed by the geodatabase when the feature geometry is edited. You should not include them in a GeoEvent Definition, as there are no values you can query or calculate which will transfer to the output feature record when attempting to add / update feature records with an Add a Feature or an Update a Feature output.

Feature attributes such as Shape__Area and Shape__Length are similar to attributes like objectid, oid, and globalid -- you will see them listed in a feature service's schema when reviewing the feature service specification in the ArcGIS REST Services Directory, but they are not attribute fields you specifically add to a feature service or whose values you modify when editing feature records via a web map. You cannot write to (or overwrite) an object identifier or global identifier value using GeoEvent Server. You also cannot write or overwrite a shape's geometrically computed area or length.

You should remove these attributes from your GeoEvent Definition so they are not included in the data sent to an outbound connector. Once removed, you should see their original values preserved as you use GeoEvent Server to update the specific attribute field(s) you want to use to indicate things like an e-mail notification having been sent for a particular feature record. You can review the blog article Using a partial GeoEvent Definition to update feature records for additional discussion on this. One of our Esri Support analysts, Nicole, recently added a comment to the article to detail a solution she configured which does pretty much what you want to do -- set an attribute field value to indicate a feature record has been processed (or, in your case, that a notification e-mail has been sent).
Posted: 01-08-2024 03:11 PM

POST
@JeffYang2023 -- I don't know if this will help. In an unrelated discussion looking at one of GeoEvent Server's processors which can potentially create a large number of threads, I was told that a user can only create so many threads. The commands below were run on a Mac, so I assume there are similar commands you can run within your Linux (?) environment.

ulimit -u
5568

sysctl kern.num_threads
kern.num_threads: 40960

sysctl kern.num_taskthreads
kern.num_taskthreads: 8192

Does this mean that a process owner on a given machine can only instantiate around 5,500 threads, while the kernel can handle perhaps 41,000 threads? I'm not sure, but the limit for the number of "task" threads is much less, and all three are orders of magnitude less than the 2,000,000 event records you observe being ingested before seeing the DefaultReactiveExecutor engine log the exception.

I was pointed to this article on stackoverflow which suggests that you may have reached a limit on the number of open files, if the process thread(s) are consuming a large number of file descriptors / handles. The article suggests punching the ulimit up from 5k to 65k ... but I really don't know what the ramifications of that might be for overall system stability. You also might take a look at this article from mastertheboss.com which walks through some suggestions for addressing the Java OutOfMemoryError.

What input have you configured to receive the sensor data? I'm assuming it is either the Receive JSON on a WebSocket or the Subscribe to an External WebSocket for JSON input? Is the data being ingested by the input in one extremely large batch? Or is the data coming in as several batches with some period of time between batches (and it is not until you eventually reach 2 million records that you see the error)? I'm wondering whether the issue is hundreds of thousands of data records received all at once as a single massive batch, versus a potential resource leak in the inbound connector you're using, where you receive 250 data records each second and it takes over 2 hours to accumulate enough data records to trigger the exception.

===

I would advise that if you need to use the -Xms and -Xmx switches to increase the JVM RAM allocation, you set their values the same. This disables dynamic memory allocation. The way Java works, if you set a minimum of 1g and a maximum of 16g, when Java determines the JVM needs to be resized it will instantiate a new (larger) JVM and copy the running state into the new instance. This isn't as much of a problem if the system creates a 4g instance to copy over a currently running 2g instance (temporarily consuming 6 gigs). But if it were trying to dynamically scale 12g to 16g? That's a lot of temporary memory being consumed to copy data from one JVM to another. It is reportedly more stable, if you really need that much memory, to set the minimum (-Xms) to 16g and set the maximum (-Xmx) to 16g as well -- to prevent the dynamic resizing.
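If it would help to see the per-user thread limit in action, here is a small Java probe -- run it only on a disposable test machine, since it deliberately exhausts the limit. It starts parked daemon threads until the JVM throws the same class of error discussed above ("java.lang.OutOfMemoryError: unable to create new native thread") and reports the count:

import java.util.concurrent.locks.LockSupport;

public class ThreadLimitProbe {
    public static void main(String[] args) {
        long count = 0;
        try {
            while (true) {
                // Each thread parks forever, so it stays alive and counts
                // against the per-user / per-process native thread limit.
                Thread t = new Thread(() -> LockSupport.park());
                t.setDaemon(true); // let the JVM exit once main() returns
                t.start();
                count++;
            }
        } catch (OutOfMemoryError e) {
            System.out.println("Started " + count + " threads before: " + e);
        }
    }
}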
Posted: 01-08-2024 10:52 AM

POST
@MaxBöcke -- I've seen what you describe (and illustrate in your attached PNG) before. In my experience, when the GeoEvent Manager web application is not listing feature services I know were created to provide access to data in a spatiotemporal data store's Data Source, and I try to browse to the feature service's page in the ArcGIS REST Services Directory, I see something like an HTTP/500 error returned. The feature service (for whatever reason) has been corrupted or its endpoint cannot be reached at REST, so queries made by web clients like GeoEvent Manager fail when they attempt to discover and list the web services.

I can confirm that outputs such as Add a Feature to a Spatiotemporal Big Data Store or Update a Feature in a Spatiotemporal Big Data Store do not rely on these web services -- the map and feature services are only used by web clients that need to query and retrieve a feature record set for display (for example) on a web map. GeoEvent Server interrogates an Enterprise portal (using a registered server connection) to discover whether or not the portal's hosting server has a registered spatiotemporal data store and to obtain the Elasticsearch credentials needed to make a direct connection. You can expect, then, that GeoEvent Server will be able to add / update feature records in a spatiotemporal data store even if you create a Data Source and leave the checkboxes used to publish a map and/or feature service unchecked. GeoEvent Server is not using the web services for data access.

You could try deleting the Enterprise portal hosted content items (e.g. the map and/or feature service listed in the Enterprise portal content manager web application) and then use GeoEvent Manager to create a new map / feature service for the existing spatiotemporal Data Source. That should preserve any data and simply create new web services for client applications that need them to access the data. If this doesn't work, I would encourage you to open an incident with Esri Technical Support so that they can work through the issue with you.
Posted: 01-08-2024 09:55 AM

POST
@Moi_Nccncc -- Data from sensors, such as the GPS supporting automated vehicle location (or AVL) solutions, is not generally held in-memory by GeoEvent Server. GeoEvent Server was designed around assumptions of frequent observational data reported from sensors and reliable periodic data reports. Whether that is data being sent to GeoEvent Server via HTTP/POST requests or GeoEvent Server polling a REST API via HTTP/GET requests, the processors and filters in a GeoEvent Service can generally only act upon data that is actively being received. Vehicles which cease to send location records generally cannot be included in any sort of real-time notification because there is no input to process.

You might think to try configuring an Incident Detector to do what you want to do. An Incident Detector is used when you want to monitor the duration of some condition, such as a tracked asset's location being detected "inside" an area of interest (or geofence). An Incident Detector will emit an event record, whose status is 'Ended', if the processor observes at least one event record satisfying its opening condition and does not receive an event record it can use to update its status before the processor's configured expiry time. The default expiry time is 300 seconds. So, it is possible to receive exactly one event record whose geometry is inside a given geofence, route that event record through an Incident Detector to create a new incident (whose status is 'Started'), and then stop receiving any additional input from tracked assets (e.g. vehicles). The Incident Detector by default will update its incident's duration until it sees 300 seconds have passed without receiving any additional data, and will signal this by emitting an event record with the vehicle's TRACK_ID and a status of 'Ended'. The administratively ended incident will have a duration 300,000 milliseconds greater than the '0' duration recorded when the incident was opened (when the vehicle was first detected 'inside' the geofence).

This really is not how an Incident Detector processor was intended to be used, however. The processor expects to process event records it receives and update its incident's status. It will emit an event record when it "gives up", having not seen any new data within its expiry window, but this is part of its error handling, not nominal use. You must also consider that the processor is configured with both an "opening condition" and a "closing condition". The "closing condition" is only checked if the "opening condition" is false. It is not obvious, but given a number of non-overlapping and adjacent geofences, a single event record routed to an Incident Detector can be used to satisfy either its opening condition or its closing condition -- not both. In other words, you cannot use an Incident Detector to process a single AVL data record and detect both that the vehicle's location has "exited" a geofence to the left of an adjacent geofence that the vehicle has just "entered". This nuance is one example of the conditions you have to think about when using processors to evaluate spatial conditions such as "enter" and "exit" while simultaneously trying to monitor conditions used to "open" and "close" incidents.

It is far better to record observations such as "entry" and "exit", and the date/time those observations are made, in a geodatabase rather than trying to hold the information in an in-memory cache and evaluate future conditions based on data cached from prior observations. There are lots of reasons why GeoEvent Server is not designed to cache and hold onto data from event records it has received. I'll try to mock-up an approach to show you what I'm thinking, but you will want to consider a solution which relies on a scheduled RDBMS job configured to run every couple of minutes to look for data records which have an "entry_time" but do not have an "exit_time", and to identify those records whose date/time entered is at least 10 minutes old relative to "now". Have the scheduled job flag each such data record as one which needs to have a notification sent (a sketch of such a job follows below). You can then use GeoEvent Server to query the flagged records. You can configure the GeoEvent Server input to delete the flagged records from the database as they are queried, so e-mail or SMS notifications are only sent once.
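Here is a minimal sketch of what that scheduled flagging job might look like, written as a stand-alone JDBC program. Everything here is an assumption for illustration -- the table and column names (geofence_visits, entry_time, exit_time, notify_flag), the PostgreSQL connection string, and the interval syntax:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class FlagOverdueVisits {
    public static void main(String[] args) throws Exception {
        // Flag rows with a recorded entry, no recorded exit, and an entry
        // time at least 10 minutes old, so a GeoEvent Server input can poll
        // for notify_flag = 1 and send the notification exactly once.
        String sql =
            "UPDATE geofence_visits " +
            "   SET notify_flag = 1 " +
            " WHERE exit_time IS NULL " +
            "   AND notify_flag = 0 " +
            "   AND entry_time <= NOW() - INTERVAL '10 minutes'"; // PostgreSQL syntax assumed
        try (Connection con = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:5432/avl", "user", "password"); // hypothetical
             PreparedStatement ps = con.prepareStatement(sql)) {
            int flagged = ps.executeUpdate();
            System.out.println("Flagged " + flagged + " overdue visit records");
        }
    }
}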
Posted: 01-05-2024 05:42 PM

POST
@Nikhil_Kommidi -- It would be good to know which release of GeoEvent Server you are running and whether it is running on a stand-alone ArcGIS Server (not federated with an Enterprise portal), or, if an Enterprise portal is part of the architecture, what role the ArcGIS Server used to run GeoEvent Server plays.

I'm confident that the ArcGIS Server platform services you mention are not part of the problem. The ArcGIS Server 'Synchronization_Service' used to run an instance of Zookeeper is only referenced by GeoEvent Server when initializing its configuration following a fresh product installation (or following an administrative reset). The ArcGIS Server platform service is checked only to see if there is an old GeoEvent Server configuration in there which might be imported and upgraded to your current release. After that, GeoEvent Server uses an instance of Zookeeper managed by the GeoEvent Gateway and does not make further use of the ArcGIS Server platform service.

ArcGIS Server is constantly communicating with the GeoEvent Gateway. If the GeoEvent Gateway service is stopped (or has crashed), the GeoEvent Server service needs to be stopped. GeoEvent Server cannot run without its Gateway managing the Apache Kafka message broker and Zookeeper distributed configuration store. If you are looking at the .../GeoEvent/data/log/wrapper.log and see that the GeoEvent Server's JVM has been shut down, that means GeoEvent Server is not running (regardless of the state of the GeoEvent Server service shown in the Windows MMC Services console). If the JVM is not running, your GeoEvent Server is not receiving, adapting, or processing real-time data. You also won't be able to launch the GeoEvent Manager web application.

It is likely that when you try to start the GeoEvent Gateway it attempts to coordinate its Kafka topics with the Zookeeper configuration. When that fails, the GeoEvent Gateway cannot initialize and shuts down. The Kafka and Zookeeper managed by the GeoEvent Gateway are very tightly coupled; Kafka cannot do its job without Zookeeper (and vice versa). If you see indications that a file beneath C:\ProgramData\Esri\GeoEvent-Gateway\zookeeper-data already exists and this is interfering with the GeoEvent Gateway initializing either its Kafka or Zookeeper ... I can only guess that something has corrupted the Gateway's runtime files.

I can offer the advice that creating system restore points using a VM snapshot (for example) is not a reliable way to back up your GeoEvent Server. A snapshot of a VM is not "application consistent" for Esri software. GeoEvent Server in particular may fail to restart following a revert to a VM snapshot if real-time data was actively being ingested, adapted, processed, and/or disseminated when the VM snapshot image was taken. When running normally the GeoEvent Gateway is actively writing data to disk -- a VM snapshot may capture an inconsistent replica or internal state which causes one or more Kafka topics to become corrupted.

I do not like recommending an administrative reset, as it is the most destructive remedial step you can perform -- particularly prior to the 11.1 release, when the reset obliterates any Input, Output, GeoEvent Service, GeoEvent Definitions, and other configurable elements you have created using GeoEvent Manager. However, if files which exist beneath C:\ProgramData\Esri\GeoEvent-Gateway are interfering with stopping and restarting the GeoEvent Gateway, an administrative reset really is your only option.
Posted: 01-05-2024 03:06 PM

POST
@AdamRepsher_BentEar -- Testing 10.9.1 (Patch 4), my advice would be to configure your GeoFence Synchronization Rule with the Replace All GeoFences in Category checkbox unchecked. Changes made to address BUG-000089545 were not included in any 10.9.1 patch nor any of the earlier release patches. Changes to the logic for geofence synchronization for this bug were first incorporated into the 11.1 release.

A couple of things to pay particular attention to when testing:

- The GeoEvent Manager's list of registered geofences needs to be periodically refreshed. The easiest way to do this is to click 'Site' (which takes you back to the 'GeoEvent Definitions' page) and then click 'GeoFences' to force that page to refresh. You need to do this to make sure you are seeing an accurate list of the geofences currently in the GeoEvent Server registry.

- Geofence Synchronization runs every nn minutes from the point you click 'Synchronize' when saving your GeoFence Synchronization Rule. I've found that the timer can slowly creep due to normal latency. This means that a synchronization you might expect to run 5 seconds after each minute might, after a couple of hours, be running at 15 or 20 seconds after each minute. To account for this, when testing, I try to wait two full synchronization cycles before checking to see if feature records I deleted resulted in corresponding geofences being removed from the GeoEvent Server registry.

- At the 10.9.1 release I had to explicitly enter a WHERE clause 1=1 into the Geofence Synchronization Rule's Query Definition parameter. This was something that was fixed in the 11.1 release.

Once you upgrade to the 11.1 release, because the logic associated with the Replace All GeoFences in Category checkbox changes, my advice is to delete and reconfigure your GeoFence Synchronization Rule to make sure the Replace All GeoFences in Category checkbox is checked.
Posted: 01-05-2024 02:25 PM

BLOG
Hello Debbie -- I am going to request we take any further troubleshooting through Esri Technical Support. From what you've shared, I'm assuming that the GeoEvent Definition you've configured your Poll an External Website for JSON input to create for you (e.g. SamsaraLocationsIN) recognized the attribute time in the data received from Samsara as a numeric value. When allowing an input to create a GeoEvent Definition for you, numeric values are adapted as a Double (because a Double is the most generic way to handle a numeric value). If indeed the time in the Samsara data record is a 13-digit epoch expressing milliseconds -- not a 10-digit epoch expressing seconds -- then you should be able to simply copy the SamsaraLocationsIN GeoEvent Definition the input created for you and edit your copy to specify that the attribute value time should be adapted as a Date. You should then delete the auto-generated SamsaraLocationsIN GeoEvent Definition and reconfigure your Poll an External Website for JSON input to use your copy of the GeoEvent Definition, the one you've tailored specifically to adapt time as a Date. You also might want to review the comments in the following Esri Community article, which present and discuss the adaptation of different representations of date/time values: Timestamps received from a sensor feed display differently in GeoEvent Sampler, ArcGIS REST Services queries, and ArcGIS Pro
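As a quick illustration of the 13-digit vs. 10-digit distinction, here is a small Java sketch (the sample epoch value is hypothetical):

import java.time.Instant;

public class EpochCheck {
    public static void main(String[] args) {
        long thirteenDigits = 1697475540000L; // milliseconds since Jan 1 1970 UTC
        long tenDigits      = 1697475540L;    // seconds since Jan 1 1970 UTC

        System.out.println(Instant.ofEpochMilli(thirteenDigits)); // 2023-10-16T16:59:00Z
        System.out.println(Instant.ofEpochSecond(tenDigits));     // 2023-10-16T16:59:00Z

        // Interpreting a 13-digit millisecond value as if it were seconds
        // lands tens of thousands of years in the future -- which is why the
        // attribute must be adapted as a Date with the correct unit in mind.
    }
}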
Posted: 10-16-2023 12:59 PM

BLOG
At the 10.8.1 release there were limited options for using a Field Calculator or Field Mapper processor to explicitly cast a data value from a numeric value to a Date. The inbound connector you are using, however, should be able to adapt the received value as a Date rather than as a Double or String. If you can provide me a sample of the data being received and screenshots of how you've configured the inbound connector and the GeoEvent Definition that connector is using, I can probably suggest a way to adapt the Samsara data you are receiving. -- RJ
Posted: 10-13-2023 11:23 AM

DOC
This article provides additional information and examples for the Poll an ArcGIS Server for Features input's Method to Identify Incremental Updates parameter. Please refer to the input's on-line help topic for usage notes, a description of the input's other parameters, and the considerations and limitations of this input.

See Also:
- Polling feature services for "Incremental Updates" (Updated August 2023)
- Question regarding "Incremental Update" workarounds, custom components?
- Using a partial GeoEvent Definition to update feature records

The Method to Identify Incremental Updates parameter has four options, as described in the Parameters table in the input's on-line help topic:

- ObjectID -- GeoEvent Server will cache the greatest object identifier from the feature record set returned from a map/feature service poll. Only features whose object identifier is greater than the value cached from the last poll will be included in the next poll.

- Timestamp since last-received to newest-feature timestamp -- GeoEvent Server will cache the greatest date/time value from the feature record set returned from a map/feature service poll. Only feature records whose timestamp is greater than the value cached from the last poll will be included in the next poll.

- Timestamp interval between last query time until now -- GeoEvent Server will construct a temporal query with a lower-bound and an upper-bound. The lower-bound will be the date/time the last query was executed. The upper-bound will be the date/time "now". Only feature records with a timestamp within the temporal query's range will be included in the next poll. The feature record timestamp is taken from a specified attribute field.

- Timestamp interval between last query time with overlap until now -- GeoEvent Server will construct a temporal query with a lower-bound equal to a specified number of seconds before the last query was executed and an upper-bound equal to the date/time "now". Only feature records whose timestamp is within the temporal query's range will be included in the next poll.

When using the Poll an ArcGIS Server for Features input's Get Incremental Updates capability, if you configure GeoEvent Server logging to include DEBUG messages from the feature service inbound transport, you will see detailed messages which identify the query expression and/or temporal query GeoEvent Server constructs to limit which feature records are polled.

(Option) - Timestamp since last-received to newest-feature timestamp

(A) 1=1 and last_updated > timestamp '1969-12-31 23:59:59'
(B) 1=1 and last_updated > timestamp '2023-08-07 13:29:00'

The logged message (A) indicates the input has no cached timestamp value. In addition to honoring the input's default Query Definition 1=1, the input has constructed a query to include any feature record with an epoch date/time greater than "the beginning of time", defined as January 1st 1970 (Midnight UTC). This is expected to retrieve any feature record with a valid (non-null) date/time stamp. The logged message (B) indicates that, of the feature records returned in the previous poll, the greatest timestamp observed was a date/time attribute value of "Aug 07 2023 13:29:00 UTC". Only feature records with a timestamp greater than this cached value are included in the next poll.

(Option) - Timestamp interval between last query time until now

(A) 1=1 and last_updated >= timestamp '1969-12-31 23:59:59' and last_updated < timestamp '2023-08-08 01:19:05'
(B) 1=1 and last_updated >= timestamp '2023-08-08 01:19:05' and last_updated < timestamp '2023-08-08 01:20:06'
(C) 1=1 and last_updated >= timestamp '2023-08-08 01:20:06' and last_updated < timestamp '2023-08-08 01:21:06'

The logged message (A) indicates that the input has no cached timestamp value. In addition to honoring the input's default Query Definition 1=1, the input has constructed a query to include any feature record with an epoch date/time greater than "the beginning of time", defined as January 1st 1970 (Midnight UTC), but less than the current time "now" when the query was executed. The logged message (B) indicates that the input last polled for feature records at "Aug 08 2023 01:19:05 (UTC)" and has therefore assigned that as the lower-bound for the constructed temporal query. The upper-bound is set to the current time "now". The expectation is that any feature records with a valid (non-null) date/time between the lower-bound and upper-bound will be retrieved. The logged message (C) indicates that the input last polled for feature records at "Aug 08 2023 01:20:06" and has therefore assigned that as the lower-bound for the constructed temporal query; again, the upper-bound is set to the current time "now".

Reviewing these logged messages, we recognize that the input is polling for input every 60 seconds. We recognize that the ArcGIS Server's managed geodatabase, in this case, is rounding date/time values to the nearest second -- otherwise the logged messages would include millisecond values. Also note that the upper-bounds in the logged messages (A) and (B) differ by 61 seconds rather than exactly 60. There is inherent latency (some number of milliseconds) between the time GeoEvent Server determines the current time "now" (setting the temporal query's upper-bound) and when a result is returned from the database. The input's polling is conducted approximately "every 60 seconds", depending on operational latency within GeoEvent Server (as data records are ingested, adapted, and processed) as well as between solution components (e.g. ArcGIS Server and ArcGIS Data Store).

(Option) - Timestamp interval between last query time with overlap until now

(A) 1=1 and last_updated >= timestamp '1969-12-31 23:59:59' and last_updated < timestamp '2023-08-08 02:21:15'
(B) 1=1 and last_updated >= timestamp '2023-08-08 02:21:05' and last_updated < timestamp '2023-08-08 02:22:45'
(C) 1=1 and last_updated >= timestamp '2023-08-08 02:22:35' and last_updated < timestamp '2023-08-08 02:24:15'

The logged message (A) indicates that the input has no cached timestamp value. In addition to honoring the input's default Query Definition 1=1, the input has constructed a query to include any feature record with an epoch date/time greater than "the beginning of time", defined as January 1st 1970 (Midnight UTC), but less than the current time "now" when the query was executed. The logged message (B) indicates that the input last polled for feature records at "Aug 08 2023 02:21:15 (UTC)". The input, in this case, was configured with an additional offset parameter, Timestamp overlap duration in seconds, set to 10 seconds, so the input has constructed a temporal query with a lower-bound 10 seconds earlier than its last query. The computed lower-bound is "Aug 08 2023 02:21:05 (UTC)". The constructed temporal query's upper-bound is set to the current time "now". Any feature records with a valid (non-null) date/time between the lower-bound (with its offset) and the upper-bound will be retrieved using this query. The logged message (C) indicates that the input last polled for feature records at "Aug 08 2023 02:22:45 (UTC)" and set a lower-bound offset by the 10 seconds specified by the input's Timestamp overlap duration in seconds parameter. The computed lower-bound is "Aug 08 2023 02:22:35 (UTC)". The constructed temporal query's upper-bound is again set to the current time "now".

Reviewing the logged messages in this third example, we recognize that the input is polling for input approximately every 90 seconds -- there is a 90-second difference between the upper-bound values in each of the constructed temporal queries.

You can use the following GeoEvent Server component logger to configure logging to include DEBUG level messages with the information illustrated above for the Poll an ArcGIS Server for Features input's Get Incremental Updates capability:

com.esri.ges.transport.featureService.FeatureServiceInboundTransport
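For those curious about the mechanics, here is a simplified Java sketch of how a temporal query like the ones logged above can be constructed for the "with overlap" option. This is illustrative only, not GeoEvent Server's actual implementation; the field name, query definition, and overlap value are assumptions:

import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class IncrementalQueryBuilder {
    // Format instants the way the logged WHERE clauses above express them.
    static final DateTimeFormatter TS =
        DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss").withZone(ZoneOffset.UTC);

    static String buildWhereClause(String queryDefinition, String dateField,
                                   Instant lastQueryTime, long overlapSeconds) {
        Instant lowerBound = lastQueryTime.minusSeconds(overlapSeconds);
        Instant upperBound = Instant.now(); // "now" at the moment the poll runs
        return queryDefinition
            + " and " + dateField + " >= timestamp '" + TS.format(lowerBound) + "'"
            + " and " + dateField + " < timestamp '"  + TS.format(upperBound) + "'";
    }

    public static void main(String[] args) {
        Instant lastPoll = Instant.parse("2023-08-08T02:21:15Z");
        System.out.println(buildWhereClause("1=1", "last_updated", lastPoll, 10));
        // e.g. 1=1 and last_updated >= timestamp '2023-08-08 02:21:05'
        //          and last_updated < timestamp '<current UTC time>'
    }
}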
Posted: 08-11-2023 04:35 PM

POST
I moved this thread from the GeoEvent Server Questions over to ArcGIS Pro as it sounds, to me, like this has more to do with a difference in behavior when adding hosted feature layers backed by a spatiotemporal data store to an ArcGIS Pro project vs. adding the layer to an Enterprise portal web map. We might want to consider taking 'GeoEvent' out of the thread's title. cc: @jill_es
Posted: 07-03-2023 11:40 AM