IDEA
Require the ability to simply index raster and point cloud (LAZ format) content on disk and generate spatial indexes showing the extent of the content plus basic high-level metadata such as file name, file path, file size, spatial reference, etc.

I regularly send raster (SID and GeoTIFF) and point cloud (always LAZ) data to customers via hard drives. After copying the data to the drive, I like to verify everything is there and also include an index of what is actually on the drive for the customer. Currently I use QT Modeler to do this, and I create indexes in both SHP and KMZ format. Customers receiving the data use a variety of GIS software, and SHP/KML provides flexibility for communicating the drive contents.

A mosaic dataset is overkill for this, and while I could write my own script to do exactly what is needed, it would be much better to have it built into the core product, so that if another customer with ArcGIS Pro ran the same indexer I could reasonably expect that we would have the same schemas. So what I'd like to see is a simple indexing routine built into the core product, likely as a GP tool, where I can check off options to generate data indexes in a variety of common formats: FGDB, SHP, KMZ, GeoJSON, etc.

I would also beg that, in supporting this, you add the full set of GDAL/PDAL libraries along with their Python bindings to the core product. That would solve your LAZ reading issues, and the indexing tool would then only need to wrap GDAL/PDAL commands. Thanks George
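The core of the idea can be sketched with only the Python standard library. This is a minimal illustration, not an Esri spec: the field names, the extension list, and the schema are my assumptions, and the extent/spatial-reference fields that GDAL/PDAL would supply are deliberately left out so the sketch stays dependency-free.

```python
import os

# Extensions this sketch treats as indexable data (illustrative assumption).
INDEXED_EXTENSIONS = {".tif", ".tiff", ".sid", ".laz"}

def build_index(root):
    """Walk a delivery drive and collect one record per data file.

    Extent and spatial reference are omitted here; in practice they
    would be read via GDAL/PDAL and the records written out as
    SHP/KMZ/GeoJSON features.
    """
    records = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in INDEXED_EXTENSIONS:
                full_path = os.path.join(dirpath, name)
                records.append({
                    "file_name": name,
                    "file_path": full_path,
                    "file_size": os.path.getsize(full_path),
                })
    return records
```

A built-in GP tool would essentially standardize this record schema and add the GDAL/PDAL-derived geometry.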
03-01-2024 08:36 AM | 1 | 0 | 175

POST
@NormBetland , did you resolve this? I'm seeing a similar error with my setup. I narrowed it down to when I'm using the Build Overviews tool with parallel processing enabled, building the overviews on one of my organization's SANs from a VM in our data center. I don't get the error when I'm building from the same VM to a Synology NAS, and I don't get it when I'm on bare metal writing to a local SAN. If I run single-threaded, I don't have an issue with the VM-to-SAN setup. If I were doing small mosaic datasets, single vs. parallel wouldn't make a difference, but I'm dealing with 1000s of source rasters.
01-25-2024 11:28 AM | 1 | 0 | 294

POST
Hey folks, I came across this last night and my quick searching yielded no obvious results. Please read and let me know if anyone else has seen this.

Problem: Given a table or feature class (probably anything built on a table) with a DATE field, when the date is calculated or updated via arcpy with datetime.datetime.now(), the value is visible in the attribute table at second resolution and visible within the auto-populated values when creating a definition query or a select-by-attributes query. But if the query uses 'is equal to' or 'includes the value(s)', both the definition query and the selection return no results. When the same field is populated manually in the attribute table, the results are as expected. When the calculation is performed with datetime.datetime.now().replace(microsecond=0), the results are also as expected.

My guess: The code that displays the date/time rounds to the nearest second for end-user visualization, but the underlying values in the GDB carry microseconds, and the SQL engine is comparing against values rounded to the nearest second, since that's what the end user selected via the UI.

To replicate: Create a new file geodatabase. Create a table with two fields, a text field and a date field. Add three rows to the table:
Case 1: Enter the date/time manually via the attribute table.
Case 2: Enter the date/time via Calculate Field using datetime.datetime.now().
Case 3: Enter the date/time via Calculate Field using datetime.datetime.now().replace(microsecond=0).
Try to select each case using select by attributes with either = or 'includes the value(s)'. Try a definition query with the same SQL statements. Example with equals and IN (which displays as 'includes the value(s)'):
mydatefield = timestamp '2023-06-08 23:00:37'
mydatefield IN (timestamp '2023-06-08 23:00:37', timestamp '2021-08-30 00:00:00', timestamp '2021-08-31 00:00:00')

Workaround: When calculating date/time values, round to the nearest second. I wanted to put this in this forum before trying to submit a bug.
Observed in ArcGIS Pro 2.9.5 and 3.1.2. Thanks George
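The precision mismatch behind the workaround can be demonstrated in plain Python, independent of any GIS code (the geodatabase/SQL behaviour itself is the observation above; this only shows why a second-resolution query value never equals a microsecond-resolution stored value):

```python
import datetime

# A value as datetime.now() would have written it (microseconds included)
stored = datetime.datetime(2023, 6, 8, 23, 0, 37, 482913)

# The value shown in the attribute table / used in the query UI
queried = datetime.datetime(2023, 6, 8, 23, 0, 37)

print(stored == queried)                          # False: microseconds differ
print(stored.replace(microsecond=0) == queried)   # True: the workaround
```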
10-20-2023 11:30 AM | 0 | 0 | 621

IDEA
As a GIS Analyst, SME, Data Janitor, and General Plumber of Information Flow, I don't have the time, patience, or visual skills to create fancy dashboards. What I do have is knowledge of what the underlying statistics represent, along with their caveats, and the ability to describe a statistic's fitness for use when addressing certain problem sets. In addition, a variety of customer use cases need access to my statistical data, for example:

- External stakeholders need high-level summaries of my data repositories and data collection capabilities.
- Internal program management needs a more real-time view of our operations.
- Internal IT management needs forecasting for how much on-prem storage we need.
- All end users need the ability to filter on a variety of spatial, temporal, and business qualities.

The statistics, maps, charts, and other elements that each use case needs have common elements, but they do not exactly align. For example, program management may care about the total area of data, whereas internal IT cares about storage size in GB; both, though, would care about projected storage costs. External stakeholders want to see more historical data than internal management, for example 'how much have you collected in my AOI'. In addition, statistics have caveats; total area, for example, depends on a definition of how that area is computed. If we were dealing with drone imagery, the footprint's area may differ significantly from the extent's area, and the total area of the individual frames would be much less than that of a final orthomosaic.

Thus the idea: allow the typical individual elements of a dashboard to be stored independent of any dashboard, and to be selectable and addable to each user's own dashboard/experience.
07-22-2022 12:45 PM | 1 | 0 | 167

POST
Hi, I've been making tile caches using mosaic datasets as a source, but I'm running out of storage space for the mosaic dataset's source data. Is there a preferred/recommended method for updating the tile cache? I could add the tile cache as a source in the mosaic dataset, or create a new mosaic dataset containing the tile cache plus the source data that will be used for updating. For example, say I have overlapping data from 2018 and 2020. I've already created the tile cache with the 2018 data, but I want to burn the 2020 data in on top. However, due to storage limitations, my 2018 data is no longer immediately available on my file system and has been removed from my mosaic dataset. If I just update the tile cache with the 2020 data, the tiles (in the tile cache) intersecting my 2020 data will be updated with the 2020 data, but there will be a no-data collar around them in the tile cache, since the 2018 data is no longer in the mosaic dataset. Thanks George
01-08-2021 07:11 AM | 1 | 0 | 265

POST
It was user error. I had been scanning the bundles and just refreshing the layer representing their extents, but the bundles that I thought were skipped were just outside the range of the symbology. My mistake. Thanks for the reply though, Michael. George
08-02-2018 06:59 AM | 0 | 0 | 538

POST
Hi, has anyone had issues using the Manage Tile Cache tool and having it skip bundles? Using ArcGIS Pro 2.1.2.
Source data: mosaic dataset referencing MrSID files on an external USB 3.0 drive, using the mosaic dataset boundary as the input AOI.
Output data: tile cache bundles at L20, ArcGIS/Bing/Google schema, writing to an SSD.
Other details: using 8 for the parallel processing factor; the system has 12 cores with hyper-threading on, so it looks like 24. I know it is advised to turn off hyper-threading, but this is a corporate workstation, so that is another bureaucratic hurdle I haven't started. See attached: the color ramp tiles are the extents of the bundles at L20; the green squares are the original MrSID footprints. I never had any problems in ArcMap; going to try that again. Thanks George
08-02-2018 05:58 AM | 0 | 2 | 699

POST
I'm using the Grid Index Features tool in ArcGIS Pro 2.1.2. I had been using Grid Index Features successfully for years in ArcMap, and then noticed issues when I tried it in ArcGIS Pro, so I went back to ArcMap. But I'd like to get this resolved, because the task is much faster in ArcGIS Pro (the results just aren't correct via Python).

I took the U.S. States feature class from the default ArcMap installation and added it to my map in ArcGIS Pro. Then I selected Rhode Island and made a layer from it. I opened the Grid Index Features tool (the usual GUI), set my output feature class, changed the environments to have a spatial reference of 32619 (UTM Zone 19N), set the width/height to 10 km, and rounded the origin coordinates to the nearest 10,000. The following is the Python command for it (copied directly from 'Copy Python Command', with only the username in the path modified):

```python
arcpy.cartography.GridIndexFeatures(r"C:\Users\MyUserName\Documents\ArcGIS\Projects\GridCreation\GridCreation.gdb\RIGrid_Test2ViaToolWindow32619env", "RhodeIsland", "INTERSECTFEATURE", "NO_USEPAGEUNIT", None, "10 Kilometers", "10 Kilometers", "-4660000 2090000", 642, 538, 1, "NO_LABELFROMORIGIN")
```

Next, in the Python window, I set the spatial reference and changed the output to a new name:

```python
arcpy.env.outputCoordinateSystem = arcpy.SpatialReference(32619)
arcpy.cartography.GridIndexFeatures(r"C:\Users\MyUserName\Documents\ArcGIS\Projects\GridCreation\GridCreation.gdb\RIGrid_Test3ViaPythonPrompt32619env", "RhodeIsland", "INTERSECTFEATURE", "NO_USEPAGEUNIT", None, "10 Kilometers", "10 Kilometers", "-4660000 2090000", 642, 538, 1, "NO_LABELFROMORIGIN")
```

The result is that the grids don't line up. See attached: the black boxes are via the tool window (and are correct); the blue boxes are off by 3+ km to the NW. The grid origin is the same. What am I doing wrong? Seems like a bug; I would think that running the tool from the GUI and running it from the Python window should yield the same results.
Thanks George
07-11-2018 06:24 AM | 0 | 0 | 438

POST
Hi, I was wondering if anyone has experience synchronizing a mosaic dataset with only a subset of source tables/folders. For example, I have three source mosaic datasets:
..\A.gdb\Mosaic1
..\B.gdb\Mosaic1
..\C.gdb\Mosaic1
and one derived mosaic dataset, ..\All.gdb\Mosaic1, which has all of the rasters that are in the three source mosaic datasets. The three source mosaic datasets were initially added to the derived mosaic dataset using the 'Table' raster type. Now, when I add new rasters to one of the source mosaic datasets, e.g. ..\C.gdb\Mosaic1, I would like to synchronize it with the derived mosaic dataset. However, I don't want to waste time also synchronizing with the other sources, as 1) they may be huge and 2) I know nothing has changed. If I use a where_clause when I synchronize, restricted to a raster in the desired source mosaic dataset, will it add new items from that source's table? Thanks George
09-21-2017 07:42 AM | 0 | 0 | 486

POST
Thanks, I've probably seen that part of the help 100s of times but never actually digested the 'String to TEXT' part in the note. Thanks George
08-23-2017 10:15 AM | 0 | 0 | 1981

POST
Any idea why the input keyword for adding a string/text field is different from the 'type' attribute reported for a string/text field? For example:

```python
myfc = r'C:\Foo\bar.gdb\somefc'
arcpy.AddField_management(myfc, 'MyStringField', 'TEXT', 5)
myfc_pyobject = arcpy.ListFields(myfc, 'MyStringField')[0]
```

myfc_pyobject.type would print out 'String'. Seems like they should be the same. Thanks George
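For anyone hitting the same surprise: the asymmetry is two vocabularies for the same field type, the keyword passed to Add Field versus the value Field.type reports back. A plain lookup dict makes the round trip explicit. The TEXT/String pair is from the post; the other pairs follow the documented pattern but are listed here as an assumption to verify against the arcpy docs — this is not an arcpy API, just an illustration.

```python
# Keyword accepted by Add Field -> value reported by arcpy's Field.type
# (TEXT/String observed in the post; remaining pairs assumed from the docs).
ADD_FIELD_TO_FIELD_TYPE = {
    "TEXT": "String",
    "SHORT": "SmallInteger",
    "LONG": "Integer",
    "FLOAT": "Single",
    "DOUBLE": "Double",
    "DATE": "Date",
}

print(ADD_FIELD_TO_FIELD_TYPE["TEXT"])  # String
```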
08-23-2017 06:36 AM | 0 | 2 | 8420

POST
I realize this is an older thread, but for those in the future who come across it while troubleshooting: my 10.5 installation was working just fine on a Win 10 machine. This morning I started having the same problem; the only thing that changed was that Windows updates were run last night. I added folder connections to drives (C:\ and H:\), and when I tried to browse their contents, ArcCatalog would crash. I could browse the connections fine in ArcMap, though. I deleted/renamed the C:\Users\...\AppData\Roaming\ESRI\Desktop10.5\ArcCatalog folder multiple times and still couldn't browse the directories. I also tried repairing the ArcGIS Desktop installation via Windows and still had the problem. Next I tried connecting to a subfolder (e.g. H:\Foo), and I could browse its contents. Very odd.
08-17-2017 06:03 AM | 0 | 0 | 714

POST
I have mosaic datasets that reference a significant amount of imagery and elevation data. The mosaic datasets are shared as image services on an ArcGIS Server in a DMZ. I can publish an image service from my internal computer, but to get large amounts of data onto the server I need to send a drive to our IT department in another state; they copy it over, and then I can log in to the server and move it where it needs to go.

Is there a way to push only new mosaic dataset items and update the image service? I have all the data on my local machine, and new data arrives on this machine. I have the folder on the server registered. But it seems that when I create a service definition file, either no data is copied (because it assumes that since the folder is registered, the data is already there) or, if the folder isn't registered, it plans on copying up all the data. I can log in to the server and move data around, so I could publish each image as a new service, go in, and then move it into the main image service, but that seems like more work. Essentially I'd like the system, under the hood, to:
1. Determine what data is on my local system.
2. Determine what identical data is in my registered folder on the server.
3. Determine what needs to be published to the registered folder to make it match my local system.
4. Publish the new data.
Preferably in Python. Any ideas would be appreciated. Thanks George
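The first three steps of the wished-for behaviour amount to a set difference between two file listings. A dependency-free sketch of that diff, with a size comparison as a cheap stand-in for "identical" (the function name and the size-based test are my assumptions, not an Esri workflow; a real implementation would also compare checksums or timestamps):

```python
import os

def files_to_push(local_root, remote_root):
    """Return relative paths present under local_root but missing from
    remote_root, plus paths whose file sizes differ."""
    def listing(root):
        out = {}
        for dirpath, _dirs, names in os.walk(root):
            for name in names:
                full = os.path.join(dirpath, name)
                out[os.path.relpath(full, root)] = os.path.getsize(full)
        return out

    local, remote = listing(local_root), listing(remote_root)
    return sorted(
        rel for rel, size in local.items()
        if rel not in remote or remote[rel] != size
    )
```

Step 4 (publishing only that delta and updating the service) is the part that still needs an arcpy/Server-side answer.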
07-18-2017 12:58 PM | 0 | 0 | 465

POST
Interesting problem, given that the typical workflow is to create contours from DEMs. Did you try using the 'Create TIN' tool, and then making the raster from the TIN?
09-01-2016 12:25 PM | 0 | 1 | 546

POST
The attached Python script can be used directly, or as inspiration, for calculating point elevations within a set of gridded polygons. It will likely require modification if the polygons are not in a gridded format. It is intended for loading within the ArcMap Python window and was tested with the WGS84.img raster (installed with ArcMap by default) and the georef15 shapefile (also installed by default).

You need to create a feature class or shapefile to hold the output spot elevations before running the tool. The output feature class should have a field that matches a unique ID/name of the input polygons (in the case of georef15 I used 'Code'), a field for the output elevation (default 'Elevation', type double), and a field for the extrema type (default 'Extrema', type string, length 3). Options include the ability to apply a negative buffer as a percentage (default 0.05; min is 0, max is 0.49999...) and to use a subgrid within the polygons (default is 2, i.e. a 2x2 subgrid; change to 1 if you just want the min/max within the original polygon). Call the function with the arguments, for example:

createSpotElevations('WGS84.img', 'georef15', 'SpotElevations', 'Code', gInsideBufferFactor=0, subgridSize=1)

Good luck. George. Note: also posted within a ModelBuilder response. Edit: noticed I had the min/max tags backwards.
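The subgrid option described above can be sketched without arcpy: split a polygon's extent into an n x n grid of sub-extents, each of which would get its own min/max elevation search. This works on rectangular extents only, which is a simplification of what the attached script does with actual polygon geometry; the function name is mine.

```python
def subgrid_extents(xmin, ymin, xmax, ymax, n=2):
    """Split an extent into an n x n grid of (xmin, ymin, xmax, ymax) cells."""
    dx = (xmax - xmin) / n
    dy = (ymax - ymin) / n
    cells = []
    for row in range(n):
        for col in range(n):
            cells.append((
                xmin + col * dx,
                ymin + row * dy,
                xmin + (col + 1) * dx,
                ymin + (row + 1) * dy,
            ))
    return cells

print(len(subgrid_extents(0, 0, 10, 10, n=2)))  # 4
```

With subgridSize=1 this degenerates to the original extent, matching the described default behaviour of just taking the min/max within the original polygon.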
09-01-2016 07:47 AM | 0 | 0 | 574
| Title | Kudos | Posted |
|---|---|---|
| | 1 | 01-25-2024 11:28 AM |
| | 1 | 03-01-2024 08:36 AM |
| | 1 | 07-22-2022 12:45 PM |
| | 1 | 01-08-2021 07:11 AM |
| | 1 | 09-01-2016 06:48 AM |