Error message even though script finishes with expected results.

06-06-2013 10:08 AM
AlexGray
New Contributor
I am working on a script to plot lines from a CSV point data file.
It all seems to work until the end, where I get an error stating that the table is not found.

When I open the FC in the map view, I see the result I was expecting.

Any help would be greatly appreciated.

Here is the error:
Traceback (most recent call last):
  File "C:\temp\GIS_projects\Sample_tripData\test.py", line 27, in <module>
    "TELATDD","GEODESIC","TRIPSEQNUM")
  File "C:\Program Files (x86)\ArcGIS\Desktop10.1\arcpy\arcpy\management.py", line 2913, in XYToLine
    raise e
ExecuteError: ERROR 999999: Error executing function.
The table was not found. [trip_lines]
Failed to execute (XYToLine).



Here is my code:
# Import system modules
import arcpy
print "importing modules and set environment...."
from arcpy import env
env.workspace = "c:/temp/gis_projects/sample_tripdata/test.gdb"
fc = "trip_lines"

# Overwrite pre-existing files
arcpy.env.overwriteOutput = True

# Set local variables
print "setting data input and output variables..."
input_table = "C:/temp/GIS_projects/Sample_tripData/A6_TEMP_ATL_TRIP_HAULx.csv"

# Check to see if the FC exists; if so, delete it.
if arcpy.Exists(fc):
    print "trip_lines exists..."
    print "deleting trip_lines..."
    arcpy.Delete_management(fc)
    print "trip_lines deleted..."

out_lines = "C:/temp/GIS_projects/Sample_tripData/test.gdb/trip_lines"

# XY To Line
print "running XY to Line Geoprocessing tool..."
arcpy.XYToLine_management(input_table, out_lines,
                          "TSLONGDD", "TSLATDD", "TELONGDD",
                          "TELATDD", "GEODESIC", "TRIPSEQNUM")
15 Replies
T__WayneWhitley
Frequent Contributor
Very well, sounds good, you are on track. That's all that matters - knowing what and where to troubleshoot is more than half the battle!

If I may suggest this, based on what you said: you may want to insert an 'intermediate' geoprocessing step where you write to a new table for better input into your tool... a kind of 'clean' processing for your CSV. You could do this with cursor processing, looping over your CSV and writing to, say, a gdb table. Couple that with a 'try-except' within the loop that records the line numbers of faulty lines to check, while writing all the 'good' lines to the table; a sketch of this idea follows below. When you correct the 'bad' lines, you can append them... or, if you know how to handle them in the code, correct them all in the same process. But at a minimum, it would probably help greatly to at least know which lines need your attention.
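
Something like this minimal sketch is the idea - the file and field names (trips.csv, the TS*/TE* coordinate columns) are hypothetical stand-ins for your data. The csv module reads the raw file, faulty lines get logged by line number, and the good rows go into a gdb table:

import csv
import arcpy

arcpy.env.workspace = "c:/temp/gis_projects/sample_tripdata/test.gdb"

# Hypothetical input csv and field names - swap in your own.
in_csv = "C:/temp/GIS_projects/Sample_tripData/trips.csv"
clean = arcpy.CreateTable_management(arcpy.env.workspace, "trips_clean").getOutput(0)
for fld in ("TSLONGDD", "TSLATDD", "TELONGDD", "TELATDD"):
    arcpy.AddField_management(clean, fld, "DOUBLE")
arcpy.AddField_management(clean, "TRIPSEQNUM", "LONG")

bad_lines = []
cur = arcpy.InsertCursor(clean)
for lineno, rec in enumerate(csv.DictReader(open(in_csv, "rb")), start=2):
    try:
        # float()/int() choke on blanks and junk values, so faulty
        # csv lines land in the except block instead of the table.
        row = cur.newRow()
        for fld in ("TSLONGDD", "TSLATDD", "TELONGDD", "TELATDD"):
            row.setValue(fld, float(rec[fld]))
        row.setValue("TRIPSEQNUM", int(rec["TRIPSEQNUM"]))
        cur.insertRow(row)
    except (TypeError, ValueError, KeyError):
        bad_lines.append(lineno)
del cur

print "csv lines needing attention:", bad_lines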

Enjoy,
Wayne
RhettZufelt
MVP Frequent Contributor
Wayne,

now I need to figure out how to get it to check for null values...

You could probably do a TableToTable_conversion on your input file. TableToTable allows an expression, so you could copy only the rows without nulls/blanks to a new table (possibly in memory) and use that table for your input.

Just a thought that doesn't need an if/then test on each iteration.

R_
AlexGray
New Contributor
You could probably do a TableToTable_conversion on your input file. TableToTable allows an expression, so you could copy only the rows without nulls/blanks to a new table (possibly in memory) and use that table for your input.

Just a thought that doesn't need an if/then test on each iteration.

R_


Yeah, that is what I've done... but it would be nice to know where the nulls are, or how many there are.
I'm working on figuring out how to run a cursor and check values, but for now this works:

I first convert the CSV file to a table within a gdb, and then convert the gdb table to another table minus the nulls.
Here is the code:
# Convert csv to table
table1 = "C:\\temp\\GIS_projects\\Sample_tripData\\TEST_HAUL_DATA.csv"
gdb = "c:\\temp\\gis_projects\\sample_tripdata\\test.gdb"
table2 = "testHaul"
table3 = "cleantestHaul"
fieldName = "HaulSeqNum"
print "Converting CSV table to GDB table..."
arcpy.TableToTable_conversion(table1, gdb, table2)
expression = arcpy.AddFieldDelimiters(arcpy.env.workspace, "DLONGDD") + " IS NOT NULL"
print "Excluding null values..."
arcpy.TableToTable_conversion(table2, gdb, table3, expression)
fc = "c:\\temp\\gis_projects\\sample_tripdata\\test.gdb\\haulLines"
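
If you just want a quick count of how many rows the filter dropped, one simple option (a sketch against the table names above) is to compare row counts with GetCount:

# How many rows did the null filter remove?
# Assumes arcpy.env.workspace points at the gdb above.
total_rows = int(arcpy.GetCount_management(table2).getOutput(0))
clean_rows = int(arcpy.GetCount_management(table3).getOutput(0))
print "%d of %d rows had a null DLONGDD" % (total_rows - clean_rows, total_rows)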


Thanks..
RhettZufelt
MVP Frequent Contributor
You could do something like this:

import arcpy
# Overwrite pre-existing files
arcpy.env.overwriteOutput = True

# Local variables:


test_csv = "D:\\baks\\dont_use\\test.csv"
in_memory = "in_memory"
temp_gdb = "D:\\baks\\dont_use\\temp.gdb"
testHaul = "in_memory\\testHaul"

# Process: Table to Table
arcpy.TableToTable_conversion(test_csv, in_memory, "testHaul")

arcpy.TableToTable_conversion(testHaul, temp_gdb, "testHaul_nonnull", "\"COMMENT\" IS NOT NULL AND \"COMMENT\" <> '' AND \"COMMENT\" <> ' '")

arcpy.TableToTable_conversion(testHaul, temp_gdb, "testHaul_nulls", "\"COMMENT\" IS NULL OR \"COMMENT\" = '' OR \"COMMENT\" = ' '")



This puts your temp table in memory, which is much faster (the expression doesn't appear to work in TableToTable with csv files... another bug??), then converts it to an FGDB table using the expression on the second TableToTable.

In my data, COMMENT is the field I am testing for NULL. Actually, my expression tests whether it is NULL, equal to blank, or equal to a single space (none of which is valid data). So, when run, I get one table, "testHaul_nonnull", with all the rows where COMMENT is not Null or blank.

The last TableToTable is basically the reverse of the expression before it, so it makes a new table, "testHaul_nulls", with all the rows where COMMENT is null, blank, or a single space.

You could also iterate through with a cursor, put the expression on the cursor, etc. (see the sketch below), but I think that would take longer. The above seems like a quick/easy way to get both the non-nulls into a "clean" table you can use for input, as well as a table with all the "errors" that were stripped out.
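
For reference, the cursor version might look like this minimal sketch (arcpy.da is 10.1+; the testHaul table and COMMENT field are just my example data from above):

# Run the same null/blank test as a where_clause on a da cursor.
where = "\"COMMENT\" IS NULL OR \"COMMENT\" = '' OR \"COMMENT\" = ' '"
with arcpy.da.SearchCursor(testHaul, ["OID@", "COMMENT"], where) as rows:
    for oid, comment in rows:
        print "row %s has a null/blank COMMENT" % oid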

R_
T__WayneWhitley
Frequent Contributor
Rhett's approach is clean, I like it... this is just additional info, a little late, but here it is: basically, a None type object is what Python returns for a null.

You can access a csv with an arcpy search cursor as shown below, possibly with the 'data access' search cursor (arcpy.da.SearchCursor). The below is a simple demo from 10.0 (da is new at 10.1).

This is my 'junk' csv file contents, set up just for this demo:
field0,field1,field2,field3
1254,text,1258,562
,,,
 ,   ,      , 
99.99,, 515, 621   
5424,mytext,2301.123,562


...and this is the simple code (entered from IDLE) showing cursor access (just printing to the screen):
>>> import arcpy
>>> arcpy.env.workspace = r'C:\Documents and Settings\whitley-wayne\Desktop'

>>> CSV = 'testCSV1.csv'

>>> fields = arcpy.ListFields(CSV)

>>> rows = arcpy.SearchCursor(CSV)

>>> lineTXT = ''

>>> for row in rows:
    for fieldObj in fields:
        lineTXT += '\'' + str(row.getValue(fieldObj.name)) + '\'\t'
    print lineTXT
    lineTXT = ''

 
'1254.0' 'text' '1258.0' '562' 
'None' 'None' 'None' 'None' 
'None' 'None' 'None' 'None' 
'99.99' 'None' '515.0' '621' 
'5424.0' 'mytext' '2301.123' '562' 
>>>  


Anyway, that was short and sweet - maybe that'll help in getting started with cursors...
And by the way, the analogous test for Null in Python simply uses the keyword None (no quotes), e.g., in an 'if' statement:
if row.getValue(field) is None:
     # then do something, for example...
     myFlag = True
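
Tying that into the demo above, a minimal sketch that flags which lines of the junk csv contain a None (reusing the CSV and fields variables from the session):

# Report the csv lines where any field came back as None.
rows = arcpy.SearchCursor(CSV)
for lineno, row in enumerate(rows, start=2):   # line 1 is the header
    if any(row.getValue(f.name) is None for f in fields):
        print 'line %d of the csv has a null value' % lineno
del rows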



Enjoy,
Wayne
KeithMcKinnon2
New Contributor II

I am getting the same error when running the XY to Line tool directly in ArcMap using a table in a file geodatabase.
