POST
Hi, The description for the "Storytelling Text & Legend" hosted app template states that you can add multiple tabs, each containing its own web map. However, when I share a map as a hosted app using that template and add the extra web map IDs in the customisation screen, they don't show up. When I download the template code and modify the JavaScript directly to include the other map IDs, the tabs appear and the app behaves as expected.

This is quite a nice template and a good replacement for the retired "Storytelling Tabbed" template. I like the display and the options to show the text and/or legend, so I'd like to be able to use it in a hosted fashion. Is this a known bug in the hosted configuration, or is it working as designed and the item description is misleading?
02-19-2013 12:36 PM | 0 | 1 | 330

POST
Hi Kevin. I would use the Near tool or a spatial join to add a variable to your points indicating the distance to the park, then run the Ordinary Least Squares tool to determine whether there's a relationship between distance and property damage. The spatial statistics help and seminars (including the recent one) will be of use to you in understanding how to use and interpret the results of the OLS tool to determine whether the variables are related. I know that's a short reply but I hope it helps. -Thom
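To make the suggestion concrete, here is an arcpy-free sketch of the kind of relationship the OLS tool tests: a plain-Python ordinary least squares fit of damage against distance. The numbers are made-up illustration data, and the real Ordinary Least Squares tool also reports diagnostics (R-squared, coefficient p-values) that this toy version omits.

```python
def ols_fit(x, y):
    """Return (slope, intercept) of the least-squares line y = slope*x + intercept."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var_x = sum((xi - mean_x) ** 2 for xi in x)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

distances = [100.0, 200.0, 300.0, 400.0]   # hypothetical distance to park (m)
damage = [50.0, 40.0, 30.0, 20.0]          # hypothetical property damage
slope, intercept = ols_fit(distances, damage)
print(slope, intercept)  # slope is -0.1 here: damage falls as distance grows
```

A clearly negative (or positive) slope with a significant p-value in the real tool is what would indicate the variables are related.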
02-17-2013 07:46 PM | 0 | 0 | 152

POST
What I have now is a date selection calendar that I can use to select the appropriate date. Do you have this operating now? That would be really useful for a tool I'm working on, too. Would you mind sharing how you did it? Edit: Hahaha, I never even noticed there was a "Date" parameter type that provided a date picker! That's super useful! Just when you start to think you've got this ArcMap thing under control...
11-22-2012 03:05 PM | 0 | 0 | 796

POST
I can't answer the actual question (i.e. why is that behaviour occurring?), but I can say that I've had to do a similar task (find all records in the same feature class within X km of each individual record). I found Select Layer By Location too slow, so I went about it a different way:

1. Use the Select tool to extract the current record into a temporary in_memory feature class containing a single point.
2. Create an empty geometry object.
3. Use the Buffer tool to buffer the temporary point FC into the geometry object.
4. Clip the original points FC using the geometry object as the clip features.

The result of the clip contains the features within X km of your original point. I guess it depends on what you're doing with the output; this probably won't work if you're looking to calculate a value back into the original FC or something. Although you could build up a dictionary of {source: [neighbours]} which you could then use to re-iterate over the original FC and apply the attributes, I suppose. Some sample pseudocode:
import arcpy

# Assumes points_fc, oid_field and analysis_distance are defined elsewhere.
arcpy.env.workspace = "IN_MEMORY"
buff_geom = arcpy.Geometry()
cur_fields = ["OID@"]
with arcpy.da.SearchCursor(points_fc, cur_fields) as cur:
    for row in cur:
        current_row_oid = row[0]
        # Select the point we're looking at into "tempTarget"
        current_row_selector = '"{0}" = {1}'.format(oid_field, current_row_oid)
        arcpy.Select_analysis(points_fc, "tempTarget", current_row_selector)
        # Buffer the current point; passing a Geometry object as the output
        # makes the tool return a list of geometry objects
        buff_geom_list = arcpy.Buffer_analysis("tempTarget",
                                               buff_geom,
                                               "%s Meters" % analysis_distance)
        # Extract the nearby features into "tempNearby"
        arcpy.Clip_analysis(points_fc, buff_geom_list, "tempNearby")
No idea if that will help you but thought I'd post it out of curiosity. Avoiding using feature layers altogether might mitigate the issue. I also found this noticeably faster (3-5x) than using the SelectByLocation tool. No idea how this will behave when you run it from ArcMap though, I was just using it externally. I'd still be very curious to find out why it's happening and how to fix it.
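For what it's worth, the {source: [neighbours]} dictionary I mentioned can be sketched without arcpy at all, using brute-force pairwise distances on plain (oid, x, y) tuples. In real use you'd fill the points list from a SearchCursor instead; the example data here is made up.

```python
import math

def neighbours_within(points, radius):
    """points: list of (oid, x, y) tuples; returns {oid: [oids within radius]}."""
    result = {}
    for oid, x, y in points:
        result[oid] = [
            other_oid
            for other_oid, ox, oy in points
            if other_oid != oid and math.hypot(ox - x, oy - y) <= radius
        ]
    return result

pts = [(1, 0.0, 0.0), (2, 3.0, 4.0), (3, 100.0, 100.0)]
print(neighbours_within(pts, 10.0))  # {1: [2], 2: [1], 3: []}
```

Brute force is O(n²), so for big feature classes the buffer/clip route (or a spatial index) is the better bet; this is just the shape of the lookup you'd iterate over afterwards.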
11-22-2012 03:03 PM | 0 | 0 | 661

POST
You can... but should you? 😉 Don't forget the old axiom - it's twice as hard to debug code as it is to write it; so if you write code as cleverly as you can, you are by definition not smart enough to debug it... 😉 Just kidding, that's actually a really nice, concise way of dumping a feature class into a dict. Between that and the with... context, arcpy is really starting to feel Pythonic!
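For anyone landing here later, the one-liner pattern under discussion looks like this, with a plain list standing in for a da.SearchCursor (which yields tuples the same way). The field names are illustrative; with arcpy it would be something like `{row[0]: row[1:] for row in arcpy.da.SearchCursor(fc, ["OID@", "NAME", "VALUE"])}`.

```python
rows = [(1, "a", 10), (2, "b", 20)]  # stand-in for cursor rows
fc_dict = {row[0]: row[1:] for row in rows}
print(fc_dict)  # {1: ('a', 10), 2: ('b', 20)}
```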
11-08-2012 02:21 PM | 0 | 0 | 453

POST
Hi, I have a set of points representing crimes, and want to show all clusters of specific crime categories which fall within a space/time window. For example, "show me where there are clusters of vehicle thefts which occurred within the same 1 km area in 24 hours". From a crime analysis point of view, the question that I'm hoping to answer is "where have bursts of activity occurred, and which incidents were involved in them, so I can go back to my database and investigate those incidents".

I've played around a lot, and looked into the help for Space-Time Cluster Analysis and associated pages, but it's not quite answering my question. I've used the Generate Spatial Weights Matrix tool to specify a 1 km, 1 day neighbour matrix. I've then converted that to a table, which gives me a list of which features are adjacent (in space & time) to which other features. If I run that process once per category, that's my "clustering analysis" complete, really; I don't want to do any statistical stuff from here. Will I have to write my own tool from here to show these clusters on the map (e.g. as MBRs around the points in question)? I guess it's a relatively small step from the neighbour table to such a tool, but I don't want to write something if I don't have to.

I feel like this is either a lot easier than I'm making it out to be, or a lot harder. I have a niggling feeling I should be able to do the same thing with just SQL. If ArcGIS had multidimensional indexing then it could be a 4D buffer/union. Is this a 'hidden feature' that I don't know about? I guess I could hack time into the Z coordinate but that's a bit... well, hacky. Thanks for your help. (Cross-posted from the spatial statistics subforum)
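In case it clarifies what I mean by the neighbour table: the space/time pairing can be sketched in plain Python as "pair up incidents within both a distance window and a time window". The field layout and the example thefts below are made up for illustration; this is the brute-force version of what the weights matrix gives me.

```python
import math
from datetime import datetime, timedelta

def space_time_pairs(incidents, max_dist, max_dt):
    """incidents: list of (oid, x, y, when); returns list of (oid_a, oid_b)
    pairs that are within max_dist AND within max_dt of each other."""
    pairs = []
    for i, (a, ax, ay, at) in enumerate(incidents):
        for b, bx, by, bt in incidents[i + 1:]:
            if math.hypot(bx - ax, by - ay) <= max_dist and abs(bt - at) <= max_dt:
                pairs.append((a, b))
    return pairs

thefts = [
    (1, 0.0, 0.0, datetime(2012, 11, 1, 9, 0)),
    (2, 500.0, 0.0, datetime(2012, 11, 1, 20, 0)),   # near incident 1, same day
    (3, 500.0, 0.0, datetime(2012, 11, 5, 20, 0)),   # near, but days later
]
print(space_time_pairs(thefts, 1000.0, timedelta(days=1)))  # [(1, 2)]
```

Drawing MBRs around the connected groups of pairs would then be the custom-tool part I'm hoping to avoid writing.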
11-08-2012 02:07 PM | 0 | 1 | 409

POST
In the interest of learning new ways to skin cats: The enumerate() function is a good way of looping over iterables while counting how far along you are. Also, don't forget 10.1 uses Python 2.7, so you can use dictionary comprehensions to get rid of the ubiquitous dict(zip()) calls. So you could reduce it to: fieldDict = {field: index for index, field in enumerate(cursor.fields)} Barely a change, really, but a bit cleaner.
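Runnable version of that one-liner, with a tuple standing in for cursor.fields (which is just a tuple of field name strings on a da cursor; the names here are illustrative):

```python
fields = ("OID@", "SHAPE@", "NAME")  # stand-in for cursor.fields
fieldDict = {field: index for index, field in enumerate(fields)}
print(fieldDict)  # {'OID@': 0, 'SHAPE@': 1, 'NAME': 2}
```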
11-07-2012 05:28 PM | 0 | 0 | 453

POST
There are a couple of other options, too. You can use the Point Statistics tool in the Spatial Analyst toolbox using a 1x1 cell neighbourhood & the SUM statistics type to count the number of occurrences in each raster cell. If you don't need a raster, you could use the Create Fishnet GP tool to make regular polygons over the region, and then do a spatial join to count the number of points. You'll have to look at the SUM statistic in the result of the spatial join tool to count the number of occurrences. Hope that helps.
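The fishnet-and-count idea boils down to binning points into regular grid cells, which you can sketch without arcpy like this (cell size and points are illustrative; the fishnet + spatial join route produces polygons with the same counts attached):

```python
from collections import Counter

def count_per_cell(points, cell_size):
    """points: list of (x, y); returns Counter keyed by (col, row) grid cell."""
    return Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in points
    )

pts = [(1.0, 1.0), (2.0, 1.5), (12.0, 3.0)]
print(count_per_cell(pts, 10.0))  # two points in cell (0, 0), one in (1, 0)
```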
11-07-2012 04:30 PM | 0 | 0 | 202

POST
Thanks Eric. Very useful. I forgot about the Zonal Geometry tool; that's a much better solution. I'd also remembered how to do the export to an older version. I had problems using the in_memory workspace for rasters in ModelBuilder, although it works for Python scripts.

Is there any chance you can explain why it's more effective to use the Radius2 parameter than clipping out the raster? Does the R2 parameter apply a dynamic mask to the layer or something? I presumed that an issue with using the R2 parameter is that you're limited to using the units of the SR of the input raster, which doesn't make for a very versatile tool, and much elevation data isn't projected so it'd be tricky to accurately calculate a distance in DD for each feature. Is that not true? I can see that the raster clip operation is probably expensive and best avoided, but I don't see a reliable way of getting a consistent distance otherwise. Or can you work around that by using the output coordinate system environment setting?
09-30-2012 03:42 PM | 0 | 0 | 804

POST
Hi, When using the Mapping Clusters tools, it's important to have a question in mind before you choose which tool to use. For example, if your question is "which households have an illness rate which is higher than we'd expect, compared to their neighbours?" you'd use the Hotspot Analysis tool. If your question was "which households have higher (or lower) values than we'd expect, and are surrounded by households with lower (or higher) values than we'd expect?" then you might use the Cluster and Outlier Analysis tool. I'd suggest you think specifically about exactly what you want to ask of your data, and then refer to the documentation to find which tool answers your question. It's entirely possible that you want to ask a question which isn't directly answerable by any of the tools in the spatial statistics toolbox! From what you've described, if I were you I'd be using the Cluster and Outlier Analysis tool. That will tell you where there are groups of low values, or a high value surrounded by low values.
09-26-2012 05:42 PM | 0 | 0 | 308

POST
Hi Chris, I made a model which seems to do what you want. A bit simpler than a script, although I can't speak to the speed or efficiency, especially over thousands of points (though it shouldn't be too bad; I'd be surprised if the R/GRASS combo wasn't faster). I made it in 10.1, so if you have 10.0 you might not be able to open it... I'm not sure how to save toolboxes as older versions, or if it's even possible. Here's a picture of the layout if you need to reproduce it. [attachment 18028: model layout screenshot] The process is exactly the same as Jeffrey outlined above. The expression in the Calculate Value tool is %visible_cells% * %cell size%. The rest should be self-explanatory if you're familiar with ModelBuilder. Hope that helps 🙂
09-26-2012 05:09 PM | 0 | 0 | 804

POST
So you want to partition the points into 50-point clusters based on distance, then draw a convex hull around each cluster? Sounds like a custom tool to me. Everything I know about K-means suggests that you can only specify the number of classes, not the number of points that will reside in each class. Can you just run the space-time clustering tool with K = number of points / 50? Even that won't guarantee 50 points per class, though.

You might be able to do it by generating a spatial weights matrix file with the KNN conceptualisation with K = 50. You'd have to write a bit of glue code to read in the SWM and make the MBRs. But it won't be exclusive; that is, you won't get groups of 50 coherent points. Each point will have 50 neighbours, and some of those neighbours will be shared with other points, so you won't get something that looks like your diagram.

I'd be surprised if there wasn't an algorithm to do this, but I don't know what it is. I guess you could do something like:

1. Assign all points to one of n/50 classes at random (where n = number of points).
2. Calculate the centroid of each class.
3. Assign each point to the class with the nearest centroid (we're just doing K-means so far).
4. Count the number of points in each class, and redistribute them so the classes are equal (this is the hard bit). For classes with >= 50 points, you could "lock in" the 50 closest points and redistribute the remainder using the same algorithm, excluding the centroid they are technically closest to.
5. GOTO 2 until convergence (i.e. the average distance doesn't change very much).

Are you able to share why you want to do this? It sounds interesting.
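The redistribution idea can be roughed out in plain Python. This is only a sketch of the capacity-balanced K-means loop described above, shrunk to 1-D points and groups of 2 instead of 50 to keep it small, and it assumes the point count divides evenly by the group size; it is not production clustering code.

```python
def balanced_kmeans(points, group_size, iterations=10):
    """points: list of floats (1-D for simplicity); returns a class index per
    point, with exactly group_size points per class. Assumes
    len(points) is a multiple of group_size."""
    k = len(points) // group_size
    # Step 1: a simple deterministic starting assignment (random also works).
    labels = [i % k for i in range(len(points))]
    for _ in range(iterations):
        # Step 2: centroid of each class.
        centroids = []
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            centroids.append(sum(members) / len(members))
        # Steps 3-4: greedily give each point its nearest centroid that still
        # has room, handling the most clear-cut points first.
        order = sorted(
            range(len(points)),
            key=lambda i: min(abs(points[i] - c) for c in centroids),
        )
        capacity = [group_size] * k
        new_labels = labels[:]
        for i in order:
            ranked = sorted(range(k), key=lambda c: abs(points[i] - centroids[c]))
            for c in ranked:
                if capacity[c] > 0:
                    capacity[c] -= 1
                    new_labels[i] = c
                    break
        labels = new_labels  # step 5: repeat until stable
    return labels

pts = [0.0, 1.0, 10.0, 11.0]
labels = balanced_kmeans(pts, group_size=2)
print(labels)  # the two low points share one class, the two high points the other
```

The greedy fill isn't guaranteed optimal (ties and awkward geometries can shuffle points), but it shows why step 4 is the hard bit: the capacities are what break plain K-means.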
08-23-2012 11:05 PM | 0 | 0 | 408

POST
Hi eskindir, The key question here is "what do you consider to be 'between'". If you mean "directly between", can you draw a line (Points to Lines tool) between the point of interest and each house, and then intersect that line with the households feature class? That would tell you how many (if any) of the houses were exactly in line. You could make this a little "fuzzier" by buffering each of the households by a small amount (say 10m) and then doing the above. It's crude, and may be intensive if you have lots of data, but it should work. If you mean "between" in any other sense ("north of the house and south of the feature", for example) then you'll have to take a different approach. Let us know exactly what you're trying to find out, ideally with an example, and we'll be able to be more helpful. Thanks 🙂
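To show what the line-plus-buffer intersect is approximating: the "directly between (with tolerance)" test is just "is the house within some distance of the straight segment from the point of interest to the target house?". A plain-Python sketch, with made-up coordinates and the 10 m tolerance from above:

```python
import math

def distance_to_segment(p, a, b):
    """Shortest distance from point p to segment a-b (all (x, y) tuples)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

poi = (0.0, 0.0)
target = (100.0, 0.0)
houses = [(50.0, 5.0), (50.0, 50.0)]
between = [h for h in houses if distance_to_segment(h, poi, target) <= 10.0]
print(between)  # only the house 5 m off the line counts as "between"
```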
08-23-2012 10:29 PM | 0 | 0 | 105

POST
Hi Chris, Can you provide a bit more detail about your problem? Are your sheep locations stored as a point feature class or in a raster grid? What are you using as a visibility surface input to your viewshed? How are you currently performing the task? Have you written a script? What kind of "interference" do you mean? Exactly what final outcome are you trying to achieve? Sounds like a lot of questions but I think everyone will be able to provide better help with a little bit more detail 🙂 Generally speaking, if you're using a DEM as the visibility surface and have the sheep locations as a points feature class, I can't see why a simple script to loop over each point wouldn't do what you're asking. Cheers!
08-23-2012 10:24 PM | 0 | 0 | 804

POST
Fixed this issue. RasterEngine.dll had been flagged by my virus scanner (Avast!) and quarantined without telling me. I restored it to its position and that resolved the issue. My colleague ran the file through a few different online virus scanners and no faults were found so it's either a very good virus or a false positive 😉 Anyway, hope that helps someone - and maybe Esri can take a look at why it got flagged in the first place?
06-13-2012 11:09 PM | 1 | 1 | 442
Title | Kudos | Posted
---|---|---
1 | 08-10-2011 10:42 PM
1 | 06-13-2012 11:09 PM

Online Status: Offline
Date Last Visited: 11-11-2020 02:23 AM