POST
Okay, I fixed the problem by using ModelBuilder and setting the bigger raster as the environment for extent, snap raster, and mask.
Posted 09-29-2016 02:12 AM
POST
Cool, thank you! This works: Con(IsNull("smaller"), "bigger", "smaller"), using 9999 as the value for the smaller raster, then the Int tool and the Extract by Attributes tool. But there is another big problem now: the extent of the result is the same as the extent of the smaller raster. How can I fix this? Thanks for responding.
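The per-cell logic of that Con(IsNull(...)) expression can be sketched in plain Python, without arcpy; here None stands in for NoData and the two input grids are invented sample data:

```python
# Sketch of Con(IsNull("smaller"), "bigger", "smaller") on two aligned grids.
# None plays the role of NoData; the sample rasters are made up for illustration.

def con_isnull(smaller, bigger):
    """Where 'smaller' is NoData, take 'bigger'; elsewhere take 'smaller'."""
    return [
        [b if s is None else s for s, b in zip(srow, brow)]
        for srow, brow in zip(smaller, bigger)
    ]

smaller = [[None, None], [9999, 9999]]   # 9999 used as the fill value, as in the post
bigger  = [[1, 2], [3, 4]]

print(con_isnull(smaller, bigger))       # [[1, 2], [9999, 9999]]
```

Note that this covers only the per-cell rule; the extent problem raised above is governed by the geoprocessing environment settings (extent, snap raster), not by the expression itself.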
Posted 09-29-2016 01:59 AM
POST
How can I extract a bigger raster with a smaller raster so that the bigger raster gets NoData wherever the smaller raster has cells? I want to remove from a big raster the cells that are covered by a smaller raster which overlaps the bigger one. Unfortunately, I cannot find a tool for that in ArcMap 10.3.1, but is there a way to do it? I do not want to subtract the values; there should be NoData in the new bigger raster where the bigger raster overlaps the smaller raster, like a hole in the bigger raster with the extent of the smaller one. I tried this: Con(IsNull("smaller"),"bigger","bigger" == "smaller"), but what I really need is to turn the cells under the smaller raster's extent into NoData in the new raster. Thank you.
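The desired operation (a NoData "hole" punched into the bigger raster wherever the smaller raster has cells) is a simple two-branch conditional per cell. A plain-Python sketch with None standing in for NoData and invented sample grids (in Spatial Analyst, leaving out the false branch of Con should yield NoData for those cells):

```python
def punch_hole(bigger, smaller):
    """Keep 'bigger' where 'smaller' is NoData; emit NoData (None) where
    'smaller' has a value -- the spirit of Con(IsNull("smaller"), "bigger")
    with the else-branch omitted."""
    return [
        [b if s is None else None for b, s in zip(brow, srow)]
        for brow, srow in zip(bigger, smaller)
    ]

bigger  = [[1, 2, 3], [4, 5, 6]]
smaller = [[None, 7, None], [None, 8, None]]  # overlaps the middle column

print(punch_hole(bigger, smaller))  # [[1, None, 3], [4, None, 6]]
```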
Posted 09-28-2016 07:15 AM
POST
Thanks a lot for your answer. This is what I did as well. I made a raster out of the polygon feature. That raster is my input for Focal Statistics with the VARIETY type over an id field which I had to create for the raster. I also ran Focal Statistics with SUM for the whole area. The last step was to divide the focal statistic of the polygon raster by the focal statistic of the whole area.
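The VARIETY part of that workflow (count distinct polygon IDs in a moving window) can be sketched in plain Python; this is a toy stand-in for Focal Statistics, not the tool itself, and the ID raster below is invented:

```python
def focal_variety(grid, radius=1):
    """Count distinct non-NoData IDs in a square window around each cell
    (a plain-Python stand-in for Focal Statistics with the VARIETY type)."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(rows):
        out_row = []
        for c in range(cols):
            ids = set()
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] is not None:
                        ids.add(grid[rr][cc])
            out_row.append(len(ids))
        out.append(out_row)
    return out

# Invented ID raster: two polygons (ids 1 and 2) on a NoData background.
ids = [
    [1,    1,    None],
    [None, None, 2],
    [None, 2,    2],
]
print(focal_variety(ids))  # [[1, 2, 2], [2, 2, 2], [1, 1, 1]]
```

Dividing such a variety grid by a whole-area focal SUM grid, cell by cell, gives the ratio described in the post.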
Posted 01-25-2016 02:54 AM
POST
Here is a shot: the ID of the elements should be the input, and it should count how many different features fall within a circle, with the whole area as the basis. The cell size is 100x100 meters, so quite a lot for the whole of Germany. Kernel Density is not an option, because I also compute point densities with point features as input and want to sum (Cell Statistics) the point density with the density of the polygons (once I know how to do it).
Posted 01-20-2016 01:37 AM
POST
Perhaps I should make a fishnet, intersect the polygons with the fishnet, and then run Feature to Point. That way I get more points the bigger an area is. Then I can use the Point Density tool with variety. I have to run further analyses on the result.
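The fishnet-then-Point-Density idea boils down to: turn each polygon into regularly spaced sample points, then count points within a circular neighborhood of each cell. A toy plain-Python sketch with invented coordinates (not the actual Point Density tool, and without its per-unit-area scaling):

```python
import math

def point_density(points, rows, cols, cell, radius):
    """Count points within 'radius' of each cell centre on a rows x cols grid
    with square cells of side 'cell' -- a toy stand-in for Point Density."""
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            cx, cy = (c + 0.5) * cell, (r + 0.5) * cell
            row.append(sum(1 for x, y in points
                           if math.hypot(x - cx, y - cy) <= radius))
        out.append(row)
    return out

# Fishnet-style sample points from two imaginary polygons.
pts = [(50, 50), (150, 50), (50, 150), (250, 250)]
print(point_density(pts, rows=3, cols=3, cell=100, radius=120))
# [[3, 2, 1], [2, 2, 1], [1, 1, 1]]
```

Because bigger polygons contribute more fishnet points, the counts scale with polygon area, which is the effect described above.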
Posted 01-19-2016 08:34 AM
POST
I need the same as the output of Line Density or Point Density, but for polygons. And I need the polygons themselves, not points derived from the polygons, so that is unfortunately not an option. What I need is a circular neighborhood of 1000 meters and how many polygons fall within this neighborhood relative to the whole area. The area is the whole of Germany, and the polygons are resting places and parking sites. Thanks for answering.
Posted 01-19-2016 08:24 AM
POST
Hello, I have to calculate a magnitude-per-unit-area from polygon features that fall within a neighborhood around each cell. Is there a tool like Point Density for polygon features? Thanks a lot for answering. I'm using ArcGIS 10.3. Jutta
Posted 01-19-2016 07:51 AM
POST
Riyas Deen Ok, I made a very stupid mistake: if I select only 4 rows, it is not possible to get a combi with more than one number, because there is no combination. When I run the selection over more rows it works perfectly. I'm so sorry. And thanks a lot for your help!
Posted 10-08-2014 05:59 AM
POST
Riyas Deen I'm so sorry, but it is still the same problem. Thanks for your help anyway. This is what I was running:
import arcpy
import os, sys
from arcpy import env
env.overwriteOutput = True
env.workspace = r"D:\Users\julia\erste_aufg\cities_UA"
env.qualifiedFieldNames = False
resulttxt = env.workspace + "\\" + "outFINAL.dbf"  # Set your output
fc_tables = arcpy.ListTables("*sort_sorted3.dbf")  # Set your filter here
tableViews = []
for tableR in fc_tables:
    #tableresultName = os.path.splitext(tableR)[0]
    arcpy.AddMessage(tableR)
    tableView = tableR + "_View"
    arcpy.MakeTableView_management(tableR, tableView)
    arcpy.SelectLayerByAttribute_management(tableView, "NEW_SELECTION", """"OID" < 4""")  # Set your where clause here
    tableViews.append(tableView)
arcpy.AddMessage(len(tableViews))
arcpy.Merge_management(tableViews, resulttxt)
Posted 10-08-2014 04:57 AM
POST
Riyas Deen Hi Riyas Deen, thank you for your answer. I tried it, but the huge problem is that the combi still gets lost. To test it I just copied the selection of one table to the new table. I'm really desperate because I don't know how to select the first rows and write them into one big table without losing the combi. And the Combi field is now a string, so why is the combi lost?
fc_tables = arcpy.ListTables("*sort_sort*")
for tableR in fc_tables:
    tableresultName = os.path.splitext(tableR)[0]
    print tableresultName
    arcpy.MakeTableView_management(env.workspace + "\\" + tableR, "tableR_ly")
    arcpy.SelectLayerByAttribute_management("tableR_ly", "NEW_SELECTION", """"OID" < 4""")
    tablename = tableresultName + "selc.dbf"
    arcpy.CreateTable_management(env.workspace, tablename, tableR)
    arcpy.CopyRows_management("tableR_ly", tablename)
Posted 10-08-2014 02:54 AM
POST
I have around 30 tables like this: Now I have to get one big table out of them, but with only the first four lines of each table. First I tried it with this code:
resulttxt = newpath + "\\" + "resultSUM.txt"
f = open (resulttxt, 'w')
f.write ("Combi,FREQUENCY,SUM_PERCEN,SUM_SUM_ar,NAME,CODE,\n")
f.close ()
f = open (resulttxt, 'a')
fc_tables = arcpy.ListTables("*sort_sort*")
for tableR in fc_tables:
    fieldsSe = ["Combi", "FREQUENCY", "SUM_PERCEN", "SUM_SUM_ar", "NAME", "CODE"]
    with arcpy.da.SearchCursor(tableR, fieldsSe, """"OID" < 4""") as sCursor:
        for row in sCursor:
            print row[0], row[1], row[2], row[3], row[4], row[5]
            f.write(str(row[0]) + "," + str(row[1]) + "," + str(row[2]) + "," + str(row[3]) + "," + str(row[4]) + "," + str(row[5]) + "\n")
f.close()
The problem with this code is that the "Combi" field of the resulttxt gets the type "double", so it does not write the combinations correctly, because the combinations are originally in a text field (see the first table). This is the result from the code with open(): Because of this I tried the Merge tool with the following code:
import arcpy
import os, sys
from arcpy import env
env.overwriteOutput = True
env.workspace = r"D:\Users\julia\erste_aufg\cities_UA"
env.qualifiedFieldNames = False
resulttxt = r"D:\Users\julia\erste_aufg\cities_UA\resultfolder"+ "\\" +"4resultSUM.txt"
fc_tables = arcpy.ListTables("*sort_sort*")
for tableR in fc_tables:
    tableresultName = os.path.splitext(tableR)[0]
    print tableresultName
arcpy.Merge_management(fc_tables, resulttxt)
The new table looks like this: This is exactly what I need, but how can I do it for a selection of just the first four lines? Right now it gives me all of them (10) back. Does someone have an idea?
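The core of the problem, taking the first four rows of each table and stacking them into one output while keeping Combi as text, can be sketched with plain csv handling in current Python (csv keeps every field as a string, so the combination codes survive; the file names and columns below are invented for the demo):

```python
import csv

def merge_first_rows(table_files, out_file, n=4):
    """Write the header plus the first n data rows of each input CSV
    into one combined CSV, keeping all fields as text."""
    header_written = False
    with open(out_file, "w", newline="") as out:
        writer = csv.writer(out)
        for path in table_files:
            with open(path, newline="") as f:
                reader = csv.reader(f)
                header = next(reader)
                if not header_written:
                    writer.writerow(header)
                    header_written = True
                for i, row in enumerate(reader):
                    if i >= n:
                        break
                    writer.writerow(row)

# Tiny demo with two invented tables of 10 rows each.
for name, combi in [("t1.csv", "1, 2"), ("t2.csv", "2, 3")]:
    with open(name, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["Combi", "FREQUENCY"])
        for i in range(10):
            w.writerow([combi, i])

merge_first_rows(["t1.csv", "t2.csv"], "merged.csv")
with open("merged.csv", newline="") as f:
    rows = list(csv.reader(f))
print(len(rows))    # 9: one header + 4 rows from each table
print(rows[1][0])   # '1, 2'  (the combination survives as text)
```

This avoids the double-typed Combi field entirely because nothing is ever parsed as a number.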
Posted 10-07-2014 05:56 AM
POST
With arcpy.Merge_management I'm able to create the big table.
Posted 10-07-2014 04:59 AM
POST
Richard Fairhurst By "the faster way" I just mean that I guess my code is not perfect. I'm totally fine with your solution, and you helped me a lot with it.
Posted 10-07-2014 04:20 AM
POST
Richard Fairhurst Ok, thank you. The problem was that I should create the combination not just for the field "GRIDCODE", because there are now two more codes and I also need their combinations ("GRIDCODE2", "GRIDCODE3"). But this is not the only thing I have to do: there is the field "area", and I have to get the sum of this field for each combination, and also the percentage of the area for one combination. I did this basically with arcpy.Statistics_analysis and arcpy.AddJoin_management. The code is very long at the moment... The big goal is one table (txt or csv) with the first 40 rows of each combination, sorted descending, including the filename in a column, the combination name, the percent, the area and the frequency (like the table below). I will try to write the output from all tables in the list one below the other, and because of the created code-name field and the field with the name of the original shp it stays comprehensible. The huge problem is that this is the first big thing I'm doing with ArcGIS and arcpy, so I guess the code is not the fastest way to the goal, but it is nearly working... The other thing making my progress a little bit hard is that the purpose of my result keeps changing, or rather is still in discussion. My actual problem is to get the three result tables of one round of the loop into one big table without losing the combinations. Here is one of the result tables which should go into the big goal: At the moment the txt produces a combi field with the type double, but I need a text field, otherwise the combination gets lost, like here: So to go back to the opening question: I copied your code three times and therefore got the three combinations. So the question is kind of answered; I guess there is a faster way. At the moment I don't know how to get my big table including the combinations. Should I ask this in a new post?
import arcpy
import os, sys
from arcpy import env
env.overwriteOutput = True
env.workspace = r"D:\Users\julia\erste_aufg\cities_UA"
env.qualifiedFieldNames = False
fcResult = arcpy.ListFeatureClasses("*_result.shp")
for shpresult in fcResult:
    shpresultName = os.path.splitext(shpresult)[0]
    print shpresultName
    outputfile = env.workspace + "\\" + shpresultName + "0_.txt"
    f = open(outputfile, 'w')
    f.write("ID,Combi,\n")
    f.close()
    f = open(outputfile, 'a')
    valueDict = {}
    with arcpy.da.SearchCursor(shpresult, ["Id", "GRIDCODE"]) as searchRows:
        for searchRow in searchRows:
            keyValue = searchRow[0]
            gridcode = searchRow[1]
            if not keyValue in valueDict:
                valueDict[keyValue] = [gridcode]
            elif not gridcode in valueDict[keyValue]:
                valueDict[keyValue].append(gridcode)
    # sort both the ID value keys and the gridcodes which are converted to a string list
    sortedDict = {}
    for keyValue in sorted(valueDict.keys()):
        items = ""
        for item in sorted(valueDict[keyValue]):
            if items == "":
                items = str(item)
            else:
                items = items + ", " + str(item)
        sortedDict[keyValue] = items
    # write to text file with the code list enclosed inside double quotes.
    for keyValue in sortedDict:
        f.write(str(keyValue) + ',"' + sortedDict[keyValue] + '"\n')
    f.close()
    # get combi2
    outputfile2 = env.workspace + "\\" + shpresultName + "2_.txt"
    f = open(outputfile2, 'w')
    f.write("ID,Combi,\n")
    f.close()
    f = open(outputfile2, 'a')
    valueDict = {}
    with arcpy.da.SearchCursor(shpresult, ["Id", "GRIDCODE2"]) as searchRows:
        for searchRow in searchRows:
            keyValue = searchRow[0]
            gridcode = searchRow[1]
            if not keyValue in valueDict:
                valueDict[keyValue] = [gridcode]
            elif not gridcode in valueDict[keyValue]:
                valueDict[keyValue].append(gridcode)
    # sort both the ID value keys and the gridcodes which are converted to a string list
    sortedDict = {}
    for keyValue in sorted(valueDict.keys()):
        items = ""
        for item in sorted(valueDict[keyValue]):
            if items == "":
                items = str(item)
            else:
                items = items + ", " + str(item)
        sortedDict[keyValue] = items
    # write to text file with the code list enclosed inside double quotes.
    for keyValue in sortedDict:
        f.write(str(keyValue) + ',"' + sortedDict[keyValue] + '"\n')
    f.close()
    # get combi3
    outputfile3 = env.workspace + "\\" + shpresultName + "3_.txt"
    f = open(outputfile3, 'w')
    f.write("ID,Combi,\n")
    f.close()
    f = open(outputfile3, 'a')
    valueDict = {}
    with arcpy.da.SearchCursor(shpresult, ["Id", "GRIDCODE3"]) as searchRows:
        for searchRow in searchRows:
            keyValue = searchRow[0]
            gridcode = searchRow[1]
            if not keyValue in valueDict:
                valueDict[keyValue] = [gridcode]
            elif not gridcode in valueDict[keyValue]:
                valueDict[keyValue].append(gridcode)
    # sort both the ID value keys and the gridcodes which are converted to a string list
    sortedDict = {}
    for keyValue in sorted(valueDict.keys()):
        items = ""
        for item in sorted(valueDict[keyValue]):
            if items == "":
                items = str(item)
            else:
                items = items + ", " + str(item)
        sortedDict[keyValue] = items
    # write to text file with the code list enclosed inside double quotes.
    for keyValue in sortedDict:
        f.write(str(keyValue) + ',"' + sortedDict[keyValue] + '"\n')
    f.close()
    # Execute TableToDBASE 3 times
    arcpy.TableToDBASE_conversion(outputfile, env.workspace)
    arcpy.MakeTableView_management(outputfile, "outputfile_ly")
    arcpy.TableToDBASE_conversion(outputfile2, env.workspace)
    arcpy.MakeTableView_management(outputfile2, "outputfile2_ly")
    arcpy.TableToDBASE_conversion(outputfile3, env.workspace)
    arcpy.MakeTableView_management(outputfile3, "outputfile3_ly")
    # get the percent of the sum area for each combi
    outtable = shpresultName + "_outtable.dbf"
    arcpy.Statistics_analysis(shpresult, outtable, [["area", "SUM"]], "Id")
    outtableSUM = shpresultName + "_outtableSUM.dbf"
    arcpy.Statistics_analysis(outtable, outtableSUM, [["SUM_area", "SUM"]])
    with arcpy.da.SearchCursor(outtableSUM, "SUM_SUM_ar") as cursor:
        for row in cursor:
            print (int(row[0]))
            fieldsum = (int(row[0]))
    print fieldsum
    arcpy.AddField_management(outtable, "PERCENT", "DOUBLE")
    arcpy.CalculateField_management(outtable, "PERCENT", "([SUM_area]*100)/{}".format(fieldsum), "VB")
    arcpy.MakeTableView_management(outtable, "outtable_ly")
    # add Join
    arcpy.AddJoin_management("outtable_ly", "Id", "outputfile_ly", "ID", "KEEP_ALL")
    outJoin = shpresultName + "_outJoin.dbf"
    arcpy.CopyRows_management("outtable_ly", outJoin)
    # sum percent for unique combi
    joinSum = shpresultName + "_joinSum.dbf"
    arcpy.Statistics_analysis(outJoin, joinSum, [["PERCENT", "SUM"], ["SUM_area", "SUM"]], "Combi")
    inputtable = shpresultName + "_joinSum.dbf"
    arcpy.MakeTableView_management(inputtable, "inputtable_ly")
    outsort = shpresultName + "sort_sorted.dbf"
    arcpy.Sort_management("inputtable_ly", outsort, [["SUM_PERCEN", "DESCENDING"]])
    # new fields and create the name+codename
    arcpy.AddField_management(outsort, "NAME", "TEXT")
    arcpy.AddField_management(outsort, "CODE", "TEXT")
    fields = ["NAME", "CODE"]
    with arcpy.da.UpdateCursor(outsort, fields) as cursor:
        for row in cursor:
            row[0] = shpresultName
            row[1] = "CODE1"
            cursor.updateRow(row)
    # get the percent: Combi2
    outtable = shpresultName + "_outtable.dbf"
    arcpy.Statistics_analysis(shpresult, outtable, [["area", "SUM"]], "Id")
    outtableSUM = shpresultName + "_outtableSUM.dbf"
    arcpy.Statistics_analysis(outtable, outtableSUM, [["SUM_area", "SUM"]])
    with arcpy.da.SearchCursor(outtableSUM, "SUM_SUM_ar") as cursor:
        for row in cursor:
            print (int(row[0]))
            fieldsum = (int(row[0]))
    print fieldsum
    arcpy.AddField_management(outtable, "PERCENT", "DOUBLE")
    arcpy.CalculateField_management(outtable, "PERCENT", "([SUM_area]*100)/{}".format(fieldsum), "VB")
    arcpy.MakeTableView_management(outtable, "outtable_ly")
    # add Join
    arcpy.AddJoin_management("outtable_ly", "Id", "outputfile2_ly", "ID", "KEEP_ALL")
    outJoin2 = shpresultName + "_outJoin2.dbf"
    arcpy.CopyRows_management("outtable_ly", outJoin2)
    # sum percent for unique combi
    joinSum2 = shpresultName + "_joinSum2.dbf"
    arcpy.Statistics_analysis(outJoin2, joinSum2, [["PERCENT", "SUM"], ["SUM_area", "SUM"]], "Combi")
    inputtable2 = shpresultName + "_joinSum2.dbf"
    arcpy.MakeTableView_management(inputtable2, "inputtable2_ly")
    outsort2 = shpresultName + "sort_sorted2.dbf"
    arcpy.Sort_management("inputtable2_ly", outsort2, [["SUM_PERCEN", "DESCENDING"]])
    # new fields and create the name+codename
    arcpy.AddField_management(outsort2, "NAME", "TEXT")
    arcpy.AddField_management(outsort2, "CODE", "TEXT")
    fields = ["NAME", "CODE"]
    with arcpy.da.UpdateCursor(outsort2, fields) as cursor:
        for row in cursor:
            row[0] = shpresultName
            row[1] = "CODE2"
            cursor.updateRow(row)
    # get the percent: Combi3
    outtable = shpresultName + "_outtable.dbf"
    arcpy.Statistics_analysis(shpresult, outtable, [["area", "SUM"]], "Id")
    outtableSUM = shpresultName + "_outtableSUM.dbf"
    arcpy.Statistics_analysis(outtable, outtableSUM, [["SUM_area", "SUM"]])
    with arcpy.da.SearchCursor(outtableSUM, "SUM_SUM_ar") as cursor:
        for row in cursor:
            print (int(row[0]))
            fieldsum = (int(row[0]))
    print fieldsum
    arcpy.AddField_management(outtable, "PERCENT", "DOUBLE")
    arcpy.CalculateField_management(outtable, "PERCENT", "([SUM_area]*100)/{}".format(fieldsum), "VB")
    arcpy.MakeTableView_management(outtable, "outtable_ly")
    # add Join
    arcpy.AddJoin_management("outtable_ly", "Id", "outputfile3_ly", "ID", "KEEP_ALL")
    outJoin3 = shpresultName + "_outJoin3.dbf"
    arcpy.CopyRows_management("outtable_ly", outJoin3)
    # sum percent for unique combi
    joinSum3 = shpresultName + "_joinSum3.dbf"
    arcpy.Statistics_analysis(outJoin3, joinSum3, [["PERCENT", "SUM"], ["SUM_area", "SUM"]], "Combi")
    inputtable3 = shpresultName + "_joinSum3.dbf"
    arcpy.MakeTableView_management(inputtable3, "inputtable3_ly")
    outsort3 = shpresultName + "sort_sorted3.dbf"
    arcpy.Sort_management("inputtable3_ly", outsort3, [["SUM_PERCEN", "DESCENDING"]])
    # new fields and create the name+codename
    arcpy.AddField_management(outsort3, "NAME", "TEXT")
    arcpy.AddField_management(outsort3, "CODE", "TEXT")
    fields = ["NAME", "CODE"]
    with arcpy.da.UpdateCursor(outsort3, fields) as cursor:
        for row in cursor:
            row[0] = shpresultName
            row[1] = "CODE3"
            cursor.updateRow(row)
# new folder
# Set local variables
out_folder_path = env.workspace
out_name = "resultfolder"
# Execute CreateFolder
arcpy.CreateFolder_management(out_folder_path, out_name)
newpath = env.workspace + "\\" + out_name
resulttxt = newpath + "\\" + "resultSUM.txt"
f = open(resulttxt, 'w')
f.write("Combi,FREQUENCY,SUM_PERCEN,SUM_SUM_ar,NAME,CODE,\n")
f.close()
f = open(resulttxt, 'a')
fc_tables = arcpy.ListTables("*sort_sort*")
for tableR in fc_tables:
    fieldsSe = ["Combi", "FREQUENCY", "SUM_PERCEN", "SUM_SUM_ar", "NAME", "CODE"]
    with arcpy.da.SearchCursor(tableR, fieldsSe, """"OID" < 4""") as sCursor:
        for row in sCursor:
            print row[0], row[1], row[2], row[3], row[4], row[5]
            f.write(str(row[0]) + "," + str(row[1]) + "," + str(row[2]) + "," + str(row[3]) + "," + str(row[4]) + "," + str(row[5]) + "\n")
f.close()
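The Statistics/Join part of the script (sum the area per combination, then express each sum as a percent of the total) is plain aggregation. A stand-in in pure Python with invented (Combi, area) rows, shown only to make the computation explicit:

```python
# Invented (Combi, area) rows standing in for the joined table.
rows = [
    ("1, 2", 40.0),
    ("1, 2", 10.0),
    ("2, 3", 30.0),
    ("3",    20.0),
]

def percent_per_combi(rows):
    """Sum area per combination and express each sum as a percent of the
    total -- what Statistics_analysis plus the PERCENT field compute above."""
    sums = {}
    for combi, area in rows:
        sums[combi] = sums.get(combi, 0.0) + area
    total = sum(sums.values())
    return {combi: (s, 100.0 * s / total) for combi, s in sums.items()}

print(percent_per_combi(rows))
# {'1, 2': (50.0, 50.0), '2, 3': (30.0, 30.0), '3': (20.0, 20.0)}
```

Because the combination key stays a string throughout, nothing is lost to numeric conversion, which is the failure mode described in the post.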
Posted 10-07-2014 03:50 AM