In the PRC_multipliprocessing script, I think there are some iterations that are unnecessary, or that can be shortened with list comprehensions.
desc = arcpy.ListFields(ddpGrid)
for field in desc:
    if field.name == "Name":
        column = field.name
idList = []
with arcpy.da.SearchCursor(ddpGrid, column) as cursor:
    for row in cursor:
        idName = row[0]
        idList.append(idName)
jobs = []
for name in idList:
    jobs.append((mxdPath, outFolder, name))  # ddpGrid, mxdFile, outFolder, name
arcpy.AddMessage("Job list has " + str(len(jobs)) + " elements.")
# print("Job list has " + str(len(jobs)) + " elements.")
This is all at the same level of indentation, and it looks like the field loop will always just set column to "Name"? Might as well skip that loop and use "Name" directly in the SearchCursor.
You can also use list comprehensions to build your jobs list.
idList = [x[0] for x in arcpy.da.SearchCursor(ddpGrid, "Name")]
jobs = [(mxdPath, outFolder, name) for name in idList]  # ddpGrid, mxdFile, outFolder, name
arcpy.AddMessage("Job list has " + str(len(jobs)) + " elements.")
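If you want the cursor released promptly, you can also keep the with block and build jobs in a single pass; a sketch, assuming mxdPath and outFolder are defined as in your script:

# build the jobs list in one cursor pass; the cursor is closed when the with block exits
with arcpy.da.SearchCursor(ddpGrid, "Name") as cursor:
    jobs = [(mxdPath, outFolder, row[0]) for row in cursor]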
To get the results/messages back from the processes, I used something like:
for mxd, result, status, message in res:
    if not result:
        arcpy.AddError("{} {} with {}!".format(mxd, status, message))
    else:
        arcpy.AddMessage("{} succeeded!".format(mxd))
In your worker, I would suggest using a single try/except/finally; the inner tries are not really doing anything. In your finally, del all the objects you have opened. Even though the garbage collector would eventually 'do' this, there is no telling whether a failed process leaves things open when an exception means the del is never reached.
Add these defaults to the worker, so the return values are defined even if an exception fires early:
result = False
status = 'Failed'
msg = ''
Then, at the end of the try in your worker, add:
    id = mxdPath  # or another id that you can use to tell which job this is
    result = True
    status = 'Success'
    msg = ''
except arcpy.ExecuteError:
    # the geoprocessor threw an error
    msg = arcpy.GetMessages(2)
except Exception as e:
    tb = sys.exc_info()[2]
    msg = "Failed at Line {} Error: {}".format(tb.tb_lineno, e)
finally:
    del pdfDoc, file1, file2, mxdPath, resSketch, mxdFile
    return (id, result, status, msg)
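Putting those pieces together, the overall shape of the worker would be something like this; a sketch only, with the actual export work elided:

import sys
import arcpy

def worker(job):
    mxdPath, outFolder, name = job  # one tuple from the jobs list
    id = mxdPath  # or another id that tells the jobs apart
    result = False
    status = 'Failed'
    msg = ''
    try:
        # ... open the mxd, set the page to name, export the PDF to outFolder ...
        result = True
        status = 'Success'
    except arcpy.ExecuteError:
        msg = arcpy.GetMessages(2)  # geoprocessor error messages
    except Exception as e:
        tb = sys.exc_info()[2]
        msg = "Failed at Line {} Error: {}".format(tb.tb_lineno, e)
    finally:
        pass  # del pdfDoc, mxdFile, etc. here
    return (id, result, status, msg)

Returning after the finally, rather than inside it, keeps the return from silently swallowing any exception the except clauses did not catch.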
Last thought for now: with so many PDFs in the queue, maybe you should break them up into ranges so the cores don't get 75k processes all at once. Maybe use the count of jobs to set up an index range and work through the jobs list 25-50 at a time. How long does one take to export?
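One way to do that is to slice the jobs list and hand the pool one batch at a time; a sketch, where BATCH is a made-up number you would tune to how long a single export takes:

BATCH = 50  # hypothetical batch size
for start in range(0, len(jobs), BATCH):
    res = pool.map(worker, jobs[start:start + BATCH])
    # report this batch's results here before queueing the next one

pool.map also accepts a chunksize argument that splits the iterable for you, but slicing it yourself lets you log progress between batches.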
These snippets might need some work to fit into your script.