
Py... blog


It is really a pain that certain highly used functions are only available at an advanced license level.  This is an alternative to using Excel to produce a pivot table from ArcMap tabular data.

 

Flipping rows and columns in data generally works smoothly when the table contains one data type, whether it be integer, float or text.  Problems arise in Excel because it lets you add data without regard to the underlying data type, so columns end up with mixed data types and rows do as well.  Uniformity by column is the rule.

 

In NumPy, each column has a particular data type.  The data type controls the operations that can be performed on it.  Numeric fields can have all the number type operations used... similarly for string/text fields.  It is possible to cast a field as an "object" type, allowing for mixed type entries.  The nice thing about this type is that you can't really do anything with it unless it is recast into a more useful form... but it does serve as a conduit to other programs or just for presentation purposes.
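As a quick aside, a tiny sketch (my own example, not from the original post) showing what an 'object' column holds and why it must be recast before doing anything numeric with it:

import numpy as np

mixed = np.array(['a', 1, 2.5], dtype='object')   # text and numbers together
print(mixed, mixed.dtype)                         # ['a' 1 2.5] object
nums = mixed[1:].astype('float64')                # recast before doing math
print(nums.sum())                                 # 3.5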

 

In the following example, the line

a = # your array goes here

can be derived using

a = arcpy.da.FeatureClassToNumPyArray(....)

 

The nature of np.unique changed in version 1.9 so that it can return the counts of the unique classes as well.  So if you are using ArcGIS Pro, you can use the newer version if desired by simply changing line 04 below.

 

a_u, idx, counts = np.unique(a_s["Class"], return_inverse=True, return_counts=True)

 

Array conversion to summary table or pivot table  |  Input and output

Well... who needs an advanced license or excel ...

Assume we have an array of the format shown in the Input section.  We can determine the counts or sums of unique values in a field, using the following.

  • sort the array on a field,
  • get unique values in that field,
  • sum using the values in another field as weights
  • rotate if desired
    import numpy as np
    a = # your array goes here
    a_s = a[np.argsort(a, order="Class")]
    a_u, idx = np.unique(a_s["Class"], return_inverse=True)
    bin_cnt = np.bincount(idx,weights=a_s['Count'])
    ans = np.array((a_u, bin_cnt), dtype='object')
    print("a_u\n{}\nidx {}\nanswer\n{}".format(a_u, idx, ans))
    rot90 = np.rot90(ans, k=1)
    and_flipud = np.flipud(rot90)  # same as np.flipud(np.rot90(ans, k=1))
    frmt = "pivot table... rotate 90, flip up/down\n{}"
    print(frmt.format(and_flipud))

 

The trick is to set the data type to 'object'.  You just use FeatureClassToNumPyArray or TableToNumPyArray and their inverses to get to/from array format.  Ergo... a pivot table should NOT require an advanced license.
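A minimal sketch of that round trip, assuming a table with 'Class' and 'Count' fields.  The paths and field names below are hypothetical, and the conversion functions live in arcpy.da:

import numpy as np
import arcpy

tbl = r"c:\your\path\data.gdb\some_table"             # hypothetical input table
a = arcpy.da.TableToNumPyArray(tbl, ['Class', 'Count'])
a_s = a[np.argsort(a, order="Class")]
a_u, idx = np.unique(a_s["Class"], return_inverse=True)
sums = np.bincount(idx, weights=a_s['Count'])
# a structured dtype is needed to send the summary back out as a table
out = np.array(list(zip(a_u, sums)), dtype=[('Class', 'U20'), ('Sum', '<f8')])
arcpy.da.NumPyArrayToTable(out, r"c:\your\path\data.gdb\class_sums")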

For all-ish combos, you can just add the desired lines to the above

 

for i in range(4):
    print("\nrotated {}\n{}".format(90*i, np.rot90(ans, k=i)))
for i in range(4):
    f = "\nrotate by {} and flip up/down\n{}"
    print(f.format(90*i, np.flipud(np.rot90(ans, k=i))))
for i in range(4):
    f = "\nrotate by {} and flip left/right\n{}"
    print(f.format(90*i, np.fliplr(np.rot90(ans, k=i))))

Input table with the following fields

'ID', 'X', 'Y', 'Class', 'Count'

 

>>> input array...
[[(0, 6.0, 0.0, 'a', 10)]
[(1, 7.0, 9.0, 'c', 1)]
[(2, 8.0, 6.0, 'b', 2)]
[(3, 3.0, 2.0, 'a', 5)]
[(4, 6.0, 0.0, 'a', 5)]
[(5, 2.0, 5.0, 'b', 2)]
[(6, 3.0, 2.0, 'a', 10)]
[(7, 8.0, 6.0, 'b', 2)]
[(8, 7.0, 9.0, 'c', 1)]
[(9, 6.0, 0.0, 'a', 10)]]

>>> # the results

a_u  # unique values
['a' 'b' 'c']
idx [0 0 0 0 0 1 1 1 2 2]

answer # the sums
[['a' 'b' 'c']
[40.0 6.0 2.0]]

pivot table... rotate 90, flip up/down
[['a' 40.0]
['b' 6.0]
['c' 2.0]]


This post was inspired by the GeoNet thread... https://geonet.esri.com/thread/116880

 

I'm a student and I need a python script that i can use for ArcMap

 

I usually suggest that my students use Modelbuilder to build workflows, export to a python script, then modify the script for general use with the existing, or other, data sets.  I personally don't use Modelbuilder, but I have used one of two methods to generate the needed workflow .... Method 1 will be presented in this post...method 2 follows.

 

Method 1

 

Do it once...get the script...modify and reuse


Because of the embedded images... please open the *.pdf file to view the complete discussion...

 

Regards

Dan

This is part 2 which follows up on my previous blog post.  In this example, the assignment restrictions have changed and one must now develop a script from what they have read about Python and the tools that are used in everyday ArcMap workflows.

 

Details are given in the attached pdf of the same name.


Regards
Dan

 

Homework... make this into a script tool for use in ArcToolbox

This blog is inspired by this thread https://geonet.esri.com/docs/DOC-2436#comment-9031 by Steve Offermann (Esri).  He suggested a very simple way to extend the capabilities of tool results and how to parse arguments for them.  I recommended the use of the Results window outputs in a previous blog.  Hats off to Steve.

 

I am only going to scratch the surface by presenting a fairly simple script...which could easily be turned into a tool.

In this example, a simple shapefile of hexagons (presented in another blog post) was processed to yield:

 

  • an extent file, giving the bounds of each of the input hexagons,
  • the hexagon corners as points, sent to a shapefile, and
  • the centroids of each hexagon, treated in a similar fashion

 

The whole trick is to parse your processes down into parameters that can be shared amongst tools.  In this case, tools that can be categorized as:

 

  • one parameter tools like:  AddXY_management and CopyFeatures_management
  • two parameter tools like:
    • FeatureEnvelopeToPolygon_management,
    • FeatureToPoint_management and
    • FeatureVerticesToPoints_management

 

This can then be amended by, or supplemented with, information on the input/output shape geometries.  I demonstrate this by calculating the X,Y coordinates for the point files.

 

So you are saying ... I don't do that stuff ... well remember, I don't do that webby stuff either.   Everyone has a different workflow and if my students are reading this, just think how you could batch project a bunch of files whilst simultaneously renaming them etc etc.  The imagination is only limited by its owner...

 

First the output....

 

Hexagon_outputs.png

 

And now the script....

"""
Script:   run_tools_demo.py
Author:   Dan.Patterson@carleton.ca

Source:   Stefan Offermann
Thread:   https://geonet.esri.com/docs/DOC-2436

Purpose:  Results window on steroids
  - take a polygon shapefile, determine its envelope,
  - convert the feature to centroids,
  - convert to feature points
  - calculate X,Y for all of the above
  - then make a backup of everything
Requires:
  - a source file
  - an output folder
  - a list of tools to run
"""
import os
import arcpy
arcpy.env.overwriteOutput = True
in_FC = "c:/!BlogPosts/Runtools_Demo/Shapefiles/pointy_hex.shp"
path,in_File = os.path.split(in_FC)
path += "/"
backup = "c:/temp/shapefiles/"    # some output folder
# file endings
end = ["_env","_fp","_vert"]      # envelop, feature to point, feature vertices
# two argument tools
two_arg = ["FeatureEnvelopeToPolygon_management",
           "FeatureToPoint_management",
           "FeatureVerticesToPoints_management"
          ]
# one argument tools
one_arg =["AddXY_management", "CopyFeatures_management"]
#
outputs = [in_FC.replace(".shp", end[i]+".shp") for i in range(len(end))]
backups =  [outputs[i].replace(path, backup) for i in range(len(end))]
#
polygons = []
points = []
for i in range(len(two_arg)):                  # run the two argument tools
    args = [in_FC, outputs[i]]                 # select the output file
    result = getattr(arcpy, two_arg[i])(*args) # run the tool...and pray
    frmt = '\nProcessing tool: {}\n  Input: {}\n  Output: {}'
    print(frmt.format(two_arg[i], args[0], args[1]))
#
for i in range(len(outputs)):
    result = outputs[i]
    print(outputs[i], arcpy.Describe(outputs[i]).shapeType)
    if arcpy.Describe(outputs[i]).shapeType == 'Point':
        result = getattr(arcpy, one_arg[0])(outputs[i])          # calculate XY
        print('Calculate XY for: {}'.format(result))
    result_bak = getattr(arcpy, one_arg[1])(result, backups[i])  # backup
    print('Create Backups: {}\n  Output: {}'.format(result, result_bak))

Enjoy and experiment with your workflows.

UPDATE:   take the poll first before you read on ... How do you write Python path strings?

 

I am sure everyone is sick of hearing ... check your filenames and paths and make sure there is no X or Y.  Well, this is going to be a work in progress which demonstrates where things go wrong while maintaining the identity of the guilty.

Think about it
>>> import arcpy
>>> aoi = "f:\test\a"
>>> arcpy.env.workspace = aoi
>>> print(arcpy.env.workspace)
f: est
>>>

 

>>> print(os.path.abspath(arcpy.env.workspace))
F:\ est
>>> print(os.path.exists(arcpy.env.workspace))
False
>>> print(arcpy.Exists(arcpy.env.workspace))
False
>>>
>>> print("{!r:}".format(arcpy.env.workspace))
'f:\test\x07'
>>>

 

>>> os.listdir(aoi)
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
OSError: [WinError 123] The filename, directory name,
or volume label syntax is incorrect: 'f:\test\x07'
>>>

 

>>> arcpy.ListWorkspaces("*","Folder")
>>>
>>> "!r:{}".format(arcpy.ListWorkspaces("*","Folder"))
'!r:None'
>>>

 

 

Examples... Rules broken and potential fixes

Total garbage... as well as way too long.  Time to buy an extra drive.

>>> x ="c:\somepath\aSubfolder\very_long\no_good\nix\this"
>>> print(x)                  # str notation
c:\somepath Subfolder ery_long
o_good
ix his
>>> print("{!r:}".format(x))  # repr notation
'c:\\somepath\x07Subfolder\x0bery_long\no_good\nix\this'
>>>
  • No r in front of the path.
  • \a \b \n \t \v are all escape characters... check the result
  • Notice the difference between plain str and repr notation

--------------------------------------------------------------------------------------------------------------------------

Solution 1... raw format

>>> x = r"c:\somepath\aSubfolder\very_long\no_good\nix\this"

>>> print(x)                  # str notation
c:\somepath\aSubfolder\very_long\no_good\nix\this

>>> print("{!r:}".format(x))  # repr notation
'c:\\somepath\\aSubfolder\\very_long\\no_good\\nix\\this'
>>>
  • Use raw formatting, the little r in front goes a long way.

--------------------------------------------------------------------------------------------------------------------------

Solution 2... double backslashes

>>> x ="c:\\somepath\\aSubfolder\\very_long\\no_good\\nix\\this"
>>> print(x)                  # str notation
c:\somepath\aSubfolder\very_long\no_good\nix\this

>>> print("{!r:}".format(x))  # repr notation
'c:\\somepath\\aSubfolder\\very_long\\no_good\\nix\\this'
>>>
  • Yes! I cleverly doubled the backslashes and everything should be fine... but notice the difference between str and repr.

--------------------------------------------------------------------------------------------------------------------------

Solution 3... forward slashes

>>> x ="c:/somepath/aSubfolder/very_long/no_good/nix/this"
>>> print(x)                  # str notation
c:/somepath/aSubfolder/very_long/no_good/nix/this
>>> print("{!r:}".format(x))  # repr notation
'c:/somepath/aSubfolder/very_long/no_good/nix/this'
>>>

 

--------------------------------------------------------------------------------------------------------------------------

Solution 4... os.path functions

There are very useful functions and properties in os.path.  The reader is recommended to examine the contents after importing the os module (i.e. dir(os.path) and help(os.path)).

 

>>> x = r"F:\Writing_Projects\Before_I_Forget\Scripts\timeit_examples.py"
>>> base_name = os.path.basename(x)
>>> dir_name = os.path.dirname(x)
>>> joined = os.path.join(dir_name, base_name)
>>> joined
'F:\\Writing_Projects\\Before_I_Forget\\Scripts\\timeit_examples.py'
>>> os.path.split(joined)  # see splitdrive, splitext, splitunc
('F:\\Writing_Projects\\Before_I_Forget\\Scripts', 'timeit_examples.py')
>>>
>>> os.path.exists(joined)
True
>>> os.path.isdir(dir_name)
True
>>> os.path.isdir(joined)
False
>>>

ad nauseam

 

--------------------------------------------------------------------------------------------------------------------------

Gotchas

Fixes often suggest the following ... what can go wrong, if you failed to check.

(1)

>>> x = "c:\somepath\aSubfolder\very_long\no_good\nix\this"
>>> new_folder = x.replace("\\","/")
>>> print(x)                  # str notation
c:\somepath Subfolder ery_long
o_good
ix his
>>> print("{!r:}".format(x))  # repr notation
'c:\\somepath\x07Subfolder\x0bery_long\no_good\nix\this'
>>>

 

(2)

>>> x = r"c:\new_project\aSubfolder\"
  File "<string>", line 1
    x = r"c:\new_project\aSubfolder\"
                                    ^
SyntaxError: EOL while scanning string literal

 

(3)

>>> x = "c:\new_project\New_Data"
>>> y = "new_grid"
>>> out = x + "\\" + y
>>> print(out)
c:
ew_project\New_Data\new_grid

 

(4)

>>> x = r"c:\new_project\New_Data"
>>> z = "\new_grid"
>>> out = x + z
>>> print(out)
c:\new_project\New_Data
ew_grid

 

(5)  This isn't going to happen again!

>>> x = r"c:\new_project\New_Data"
>>> z = r"\new_grid"
>>> out = x + y
>>> print(out)
c:\new_project\New_Datanew_grid

 

(6)  Last try

>>> x = r"c:\new_project\New_Data"
>>> z = r"new_grid"
>>> please = x + "\\" + z
>>> print(please)
c:\new_project\New_Data\new_grid

 

Well this isn't good!   Lesson?  Get it right the first time. Remember the next time someone says...

Have you checked your file paths...?????   Remember these examples.

 

Curtis pointed out this helpful link...I will include it here as well

Paths explained: Absolute, relative, UNC, and URL—Help | ArcGIS for Desktop

 

That's all for now.

I will deal with spaces in filenames in an update.  I am not even going to go near UNC paths.

Code formatting tips

 

Updated - 2017/06/27  Added another reference and some editing.

 

This topic has been covered by others as well...

 

We all agree the Geonet code editor is horrible... but it has been updated.

Here are some other tips.

 

To begin... introduction or review

  • don't try to post code while you are responding to a thread in your inbox
  • access the More button, from the main thread's title... to do this:
    • click on the main thread's title
    • now you should see it... you may proceed

 

Step 1... select ... More ... then ... Syntax highlighter
  • Go to the question and press Reply ...
  • Select the Advanced editor if needed (or ...),  then select

If you can't see it, you didn't select it

 

More... Syntax highlighter

 

Your code can now be pasted in and highlighted with the language of your choice .........

Your code should be highlighted something like this ............

--- Python -----------------------------

import numpy as np
a = np.arange(5)
print("Sample results:... {}".format(a))

--------------------------------------------

Now the above example gives you the language syntax highlighting that you are familiar with..

Alternatives include just using the HTML/XML option

-----HTML/XML ---------------------

# just use the HTML/XML option.. syntax colors will be removed
import numpy as np
a = np.arange(5)
print("simple format style {}".format(a))
simple format style [0 1 2 3 4]

--------------------------------------------

 

NOTE:   you can only edit code within this box and not from outside!

 

 

 

Script editing tips ... cont'd

HTML editing tips:....

  • You can get into the html editor for fine-tuning, but it can get ugly for the uninitiated.
  • Comments get deleted ... this is not a full editor under your control
  • If you have lots of text to enter, do it first then enter and format code
  • If editing refresh is slow, use the HTML editor or you will have retired before it completes.
  • The editor seems to edit each character as you type and it gets painfully slower as the post gets bigger.
  • You can improve comments and code output by using tables like this and in the example below.

 

Here is a simple script with code and results published in columns (2 columns * 1 row).  If the contents are wider than the screen, the scroll-bar will be located at the end of the document rather than attached to each table (except for iThingys, then just use swipe).

 

Sample script using a plain format... 1920x1080px screen size  |  Result 2
>>> import numpy as np
>>> a = np.arange(5)
>>> print("Sample results:... {}".format(a))
>>> # really long comment 30 |------40 | -----50 | -----60 | -----70 | ---- 80|
Sample results:... [0 1 2 3 4]
>>> # really long comment 30 |------40 | -----50 | -----60 | -----70 | ---- 80|

 

Leave space after a table so you can continue editing after the initial code insertion.

It is often hard to select the whitespace before or after a table and you may need to go to the html editor < > just above the More toggle

 

Larger script sample...

Before code tip:  try to keep your line length <70 characters

# -*- coding: UTF-8 -*-
"""
:Script:   demo.py
:Author:   Dan.Patterson@carleton.ca
:Modified: 2016-08-14
:Purpose:  none
:Functions:  help(<function name>) for help
:----------------------------
: _demo  -  This function ...
:Notes:
:References
:
"""

#---- imports, formats, constants ----

import sys
import numpy as np
from textwrap import dedent

ft = {'bool':lambda x: repr(x.astype('int32')),
      'float': '{: 0.3f}'.format}
np.set_printoptions(edgeitems=10, linewidth=80, precision=2, suppress=True,
                    threshold=100, formatter=ft)
script = sys.argv[0]

#---- functions ----

def _demo():
    """  
    :Requires:
    :--------
    :
    :Returns:
    :-------
    :
    """

    return None

#----------------------
if __name__ == "__main__":
    """Main section...   """
    #print("Script... {}".format(script))
    _demo()

 

Some space for editing after should be left since positioning the cursor is difficult after the fact.

 

Output options

 

  • You can paste text and graphics within a table column.
  • You can format a column to a maximum pixel size.

 

Sample output with a graph

Option 0: 1000 points
[[ 2. 2.]
[ 3. 3.]] extents
.... snip
Time results: 1.280e-05 s, for 1000 repeats

point_in_polygon.png

 

So there has been some improvement. 

Again...

You just have to remember that to edit code...

you have to go back to the syntax highlighter.

You can't edit directly on the page.

Reclassifying raster data can be a bit of a challenge, particularly if there are nodata values in the raster.  This is a simple example of how to perform classifications using a sample array (background... the 6,000+ rasters thread and the follow-up).

 

An array will be used since it is simple to bring in raster data to numpy using arcpy's:

  •   RasterToNumPyArray
    •   RasterToNumPyArray (in_raster, {lower_left_corner}, {ncols}, {nrows}, {nodata_to_value})

and sending the result back out using:

  • NumPyArrayToRaster
    • NumPyArrayToRaster (in_array, {lower_left_corner}, {x_cell_size}, {y_cell_size}, {value_to_nodata})
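A minimal sketch of that round trip, assuming a single-band raster on disk.  The paths and the threshold reclass are hypothetical; the keyword arguments are the optional parameters listed above:

import numpy as np
import arcpy

in_ras = r"c:\your\path\some_raster.tif"               # hypothetical raster
ras = arcpy.Raster(in_ras)
ll = arcpy.Point(ras.extent.XMin, ras.extent.YMin)     # lower left corner
a = arcpy.RasterToNumPyArray(in_ras, nodata_to_value=-9999)

a_rc = np.where(a < 30, 1, 2)                          # a trivial 2-class reclass
a_rc = np.where(a == -9999, -9999, a_rc)               # keep the nodata cells

out = arcpy.NumPyArrayToRaster(a_rc, ll, ras.meanCellWidth,
                               ras.meanCellHeight, value_to_nodata=-9999)
out.save(r"c:\your\path\some_raster_rc.tif")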

On with the demo...

 

Raster with full data

old                                new

[ 0  1  2  3  4  5  6  7  8  9]    [1 1 1 1 1 2 2 2 2 2]

[10 11 12 13 14 15 16 17 18 19]    [3 3 3 3 3 4 4 4 4 4]

[20 21 22 23 24 25 26 27 28 29]    [5 5 5 5 5 6 6 6 6 6]

[30 31 32 33 34 35 36 37 38 39]    [7 7 7 7 7 7 7 7 7 7]

[40 41 42 43 44 45 46 47 48 49]    [7 7 7 7 7 7 7 7 7 7]

[50 51 52 53 54 55 56 57 58 59]    [7 7 7 7 7 7 7 7 7 7]

# basic structure
a = np.arange(60).reshape((6, 10))
a_rc = np.zeros_like(a)
bins = [0, 5, 10, 15, 20, 25, 30, 60, 100]
new_bins = [1, 2, 3, 4, 5, 6, 7, 8]
new_classes = zip(bins[:-1], bins[1:], new_bins)
for rc in new_classes:
    q1 = (a >= rc[0])
    q2 = (a < rc[1])
    z = np.where(q1 & q2, rc[2], 0)
    a_rc = a_rc + z
a_rc   # the reclassified result (use return a_rc if this is wrapped in a function)

 

Lines 2, 3, 4 and 5 describe the array/raster, the classes that are to be used in reclassifying the raster and the new classes to assign to each class.  Line 5 simply zips the bins and new_bins into a new_classes arrangement which will subsequently be used to query the array, locate the appropriate values and perform the assignment (lines 6-10 )

 

Line 3 is simply the array in which the results will be placed.  The np.zeros_like function essentially creates an array with the same structure and data type as the input array.  There are other options that could be used to create containment or result arrays, but reclassification is going to be a simple addition process...

 

  • locate the old classes
  • reclass those cells to a new value
  • add the results to the containment raster/array

 

Simple but effective... just ensure that your new classes are inclusive by adding one class value outside the possible range of the data.

 

Line 10 contains the np.where statement which cleverly allows you to put in a query and assign an output value where the condition is met and where it is not met.  You could be foolish and try to build one big wonking query that handles everything in one line... but you would soon forget how it works when you revisit the result the next day.  So to alleviate this possibility, the little tiny for loop does the reclassification one grouping at a time and adds the resultant to the destination array.  When the process is complete, the final array is returned.

 

Now on to the arrays/rasters that have nodata values.  The assignment of nodata values is handled by RasterToNumPyArray so you should be aware of what is assigned to it.

 

Raster with nodata values

old                                  new

[--  1  2  3  4  5  6 --  8  9]      [--  1  1  1  1  2  2 --  2  2]

[10 11 12 13 -- 15 16 17 18 19]      [ 3  3  3  3 --  4  4  4  4  4]

[20 -- 22 23 24 25 26 27 -- 29]      [ 5 --  5  5  5  6  6  6 --  6]

[30 31 32 33 34 -- 36 37 38 39]      [ 7  7  7  7  7 --  7  7  7  7]

[40 41 -- 43 44 45 46 47 48 --]      [ 7  7 --  7  7  7  7  7  7 --]

[50 51 52 53 54 55 -- 57 58 59]]     [ 7  7  7  7  7  7 --  7  7  7]

Make a mask (aka ... nodata values) where the numbers are divisible by 7 and the remainder is 0.

Perform the reclassification using the previous conditions.

 

# mask the values
a_mask = np.ma.masked_where(a % 7 == 0, a)
a_mask.set_fill_value(-1)   # set the nodata value
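A minimal sketch of the reclassification applied to the masked array (my wording, not the attached script).  It reuses bins and new_bins from the code above; the mask simply rides along through np.ma.where and the additions:

a_rc = np.ma.masked_array(np.zeros(a_mask.shape, dtype=a_mask.dtype),
                          mask=a_mask.mask)
for lo, hi, new in zip(bins[:-1], bins[1:], new_bins):
    a_rc = a_rc + np.ma.where((a_mask >= lo) & (a_mask < hi), new, 0)
a_rc.set_fill_value(-1)
print(a_rc)          # masked cells display as --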

 

The attached sample script prints out the test with the following information:

 

Input array ... type ndarray

...snip...

Reclassification using

:  from [0, 5, 10, 15, 20, 25, 30, 60, 100]

:  to   [1, 2, 3, 4, 5, 6, 7, 8]

:  mask is False value is None

Reclassed array

...snip...

 

Input array ... type MaskedArray

...snip

Reclassification using

:  from [0, 5, 10, 15, 20, 25, 30, 60, 100]

:  to   [1, 2, 3, 4, 5, 6, 7, 8]

:  mask is True value is -1

Reclassed array

...snip....

-------------------------------------

That's about all for now.  Check the documentation on masked arrays and their functions.  Most functions and properties that apply to ndarrays also apply to masked arrays... it's like learning a new language just by altering the pronunciation of what you already know.

Anaconda ... python ... spyder ... conda .... arcgis pro

Ok... change is inevitable... at least we still have IDEs, having harkened from the card reader days.

So this is a running visual of installing Spyder so I can use it with ArcGIS PRO and for general python 3.4/5/6 work.

I have used pythonwin and pyscripter for some time.

Some people are charmed by PyCharm and some just can't wait to be IDLE ... but for now, I will document Spyder.

I will add to this as I find more..

 

Updates:

  2017-02-18 Anaconda 4.3 is out  

see the full changelog for details...  some highlights...

- Anaconda Navigator upgraded from 1.4.3 to 1.5.0  (see below...)

- MatPlotLib 2.0 is now installed by default,

- Jupyter notebook extensions not installed by default, but can be installed on their own

 

  2016-10-20 Conda and ArcGIS Pro | ArcPy Café 

 

Things just got a whole load easier....

 

Current distribution supports up to and including python 3.6, and a nice new Spyder plus many more upgraded packages.  I am using this for future-proofing my work.  Arc* will eventually be there so you may as well test while you can.

 

-------------------------------------------------------------------------------------------------------------------------------------------------

I will leave the material below as a legacy record since much of it still applies

The original link on ArcGIS Pro and the changes to managing python environments can be found here

.... Python and ArcGIS Pro 1.3 : Conda

 

Related python information can also be found in

.....   The ...py... links

        Coming to Python.... preparation and anticipation

        Python 3.5 comes to iThings

 

Additions and modifications  |  Documentation

-

-  2016-07-15  importing xlsxwriter

-  2016-07-15  initial post

  1. Anaconda | Continuum Analytics: Documentation
  2. Get started — Conda documentation
  3. Anaconda package list
  4. Excel plug-ins for Anaconda |
  5. Spyder Documentation

 

:------------------------------------------------------------------------------------------------------------------------:

State the obvious..... Install ArcGIS PRO 1.3 (or above for future-proofing this blog)

Just follow the instructions.  Don't try and monkey-patch an old machine that barely runs ArcMap.  Start fresh.

 

1.  Setting your default editor

I have never used Arc*'s built-in IDE.  I am not sure why they include it, except for the fact that they can control its installation and no one needs to worry about problems associated with a separate IDE.  If you want to use another one, go to the Project pane, then Geoprocessing Options, and do some setup.  Spyder is located in a somewhat cryptic folder path, which I have shown in navigation mode and in step 2 as a visual with cutouts for the fiddle bits.

 

spyder_01.png

 

:------------------------------------------------------------------------------------------------------------------------:

2. The file path to locate the executable

spyder_02.png

:------------------------------------------------------------------------------------------------------------------------:

3. The site-package folder

 

What is included by default with an ArcGIS Pro installation... this is not a complete list of available packages... the list of those is given above in the table.  The packages come by python version.  We are currently using python 3.4.x in ArcGIS PRO and ArcMap.

spyder_03.png

 

:------------------------------------------------------------------------------------------------------------------------:

4. The Spyder folder contents

 

What is in the spyder folder... scientific_startup does some standard imports for what-it-says-type work

 

spyder_04.png

:------------------------------------------------------------------------------------------------------------------------:

5. The pkgs folder

 

A record of the packages that were installed.

 

spyder_05.png

 

:------------------------------------------------------------------------------------------------------------------------:

6.  Importing packages... xlsxwriter demo

 

Here is the workflow I used to import the xlsxwriter module for use in python and arcmap (presumably).

From the start button (Windows 10, bottom left), navigate to the ArcGIS folder via All Apps, find the Python Command Prompt, right-click on it and Run as Administrator.

xlsxwriter1.png

Do the conda install xlsxwriter entry as suggested in the originating post.

xlsxwriter2.png

Hit Enter and away you go. The magic happens and it should be installed.

xlsxwriter3.png

 

At this stage, I went back to Spyder and from the IPython console I tested... looks good
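Roughly what that console test looks like (my reconstruction, with a hypothetical output path):

import xlsxwriter
print(xlsxwriter.__version__)                          # confirms the install
wb = xlsxwriter.Workbook(r"c:\temp\conda_test.xlsx")   # hypothetical workbook
ws = wb.add_worksheet()
ws.write(0, 0, "xlsxwriter is installed")
wb.close()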

 

xlsxwriter4.png

:------------------------------------------------------------------------------------------------------------------------:

More as I find it...

Sliding/Moving windows ...

 

This is the companion to block functions introduced earlier.  I will keep it simple.  The stats functions for rasters with and without nodata values still apply to this type of treatment.  The only difference is how the sub-arrays are generated.  I won't get into the details of how they are done here, but the stride function uses some pretty tricky handling of the input array structure to parse out the moving windows.

 

Remember, block functions move one window size across and down, per movement cycle.  Sliding functions move one cell across and down and sample the cells based on the window size... hence, there are many more windows to sample in sliding functions than in block functions.  I will just show the code snippets and you can play on your own.  I have posted sample timing results for comparative purposes for both implementations.

 

 

Code section and comments
  • The stride_tricks does the work.  You can examine the code in the ...site-packages, numpy lib folder.
  • functools provides decorator capabilities.  I tend to import it by default since I use decorators a lot for timing and documentation functions. 
  • The textwrap module provides several functions, with dedent being useful for output documentation.
  • The numpy set_printoptions function allows you to control the appearance of output.  The version here is quite simplified...refer to the documentation for more information.

 

Stride and block functions provide the means to sample the input array according to the type of data, the size and layout.  You must read the documentation and experiment in order to fully understand.  It really messes with your head and makes the Rubik's cube look simple.  The decorator and timer functions I have talked about before.  Refer to my previous blogs for more information.

imports

import numpy as np
from numpy.lib.stride_tricks import as_strided
from functools import wraps
from textwrap import dedent
np.set_printoptions(edgeitems=3,linewidth=80,precision=2,
                    suppress=True,threshold=100)

 

slide (stride) and block functions

#---- functions ----
def _check(a, r_c):
    """Performs the array checks necessary for stride and block.
    : a   - Array or list.
    : r_c - tuple/list/array of rows x cols. 
    :Attempts will be made to
    :     produce a shape at least (1*c).  For a scalar, the
    :     minimum shape will be (1*r) for 1D array or (1*c) for 2D
    :     array if r<c.  Be aware
    """

    if isinstance(r_c, (int, float)):
        r_c = (1, int(r_c))
    r, c = r_c
    a = np.atleast_2d(a)
    shp = a.shape
    r, c = r_c = ( min(r, a.shape[0]), min(c, shp[1]) )
    a = np.ascontiguousarray(a)
    return a, shp, r, c, tuple(r_c)
   
def stride(a, r_c=(3, 3)):
    """Provide a 2D sliding/moving view of an array. 
    :  There is no edge correction for outputs.
    :
    :Requires
    :--------
    : a - array or list, usually a 2D array.  Assumes rows is >=1,
    :     it is corrected as is the number of columns.
    : r_c - tuple/list/array of rows x cols.  Attempts  to
    :     produce a shape at least (1*c).  For a scalar, the
    :     minimum shape will be (1*r) for 1D array or 2D
    :     array if r<c.  Be aware
    """

    a, shp, r, c, r_c = _check(a, r_c)
    shape = (a.shape[0] - r + 1, a.shape[1] - c + 1) + r_c
    strides = a.strides * 2
    a_s = (as_strided(a, shape=shape, strides=strides)).squeeze()   
    return a_s


def block(a, r_c=(3, 3)):
    """See _check and/or stride for documentation.  This function
    :  moves in increments of the block size, rather than sliding
    :  by one row and column
    :
    """

    a, shp, r, c, r_c = _check(a, r_c)
    shape = (a.shape[0] // r, a.shape[1] // c) + r_c
    strides = (r*a.strides[0], c*a.strides[1]) + a.strides
    a_b = as_strided(a, shape=shape, strides=strides).squeeze()
    return a_b
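
A quick sanity check of the two views on a small array (my example, not part of the original post):

a = np.arange(36).reshape(6, 6)
s = stride(a, r_c=(3, 3))     # slides one cell at a time -> shape (4, 4, 3, 3)
b = block(a, r_c=(3, 3))      # jumps a full window       -> shape (2, 2, 3, 3)
print(s.shape, b.shape)
print(b[0, 0])                # the top-left 3x3 block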

 

timing decorator, decorated stat function and time test

def delta_time(func):
    """simple timing decorator function"""
    import time
    @wraps(func)
    def wrapper(*args, **kwargs):
        print("Timing function for... {}".format(func.__name__))
        t0 = time.perf_counter()        # start time
        result = func(*args, **kwargs)  # ... run the function ...
        t1 = time.perf_counter()        # end time
        print("Results for... {}".format(func.__name__))
        print("  time taken ...{:12.9e} sec.".format(t1-t0))
        #print("\n {}".format(result))  # print within wrapper
        return result                   # return result
    return wrapper

@delta_time
def array_mean(a):
    """change the func as needed... a simple mean over each window"""
    a_mean = a.mean(axis=(2, 3))
    return a_mean
def time_test():
    """time test for block and sliding raster"""
    N = [100, 500, 1000, 2000, 3000, 4000, 5000]
    for n in N:
        a = np.arange(n*n).reshape((n, n))
        #a0 = stride(a, r_c=(3, 3))  # uncomment this or below
        a0 = block(a, r_c=(3, 3))
        print("\nArray size... {0}x{0}".format(n))
        a_stat = array_mean(a0)       # time function
    return None

 

main section

if __name__=="__main__":
    """comparison between block and slide view"""
    prn=False
    time_test()

Description:  timing examples for the sliding and block mean

 

Timing results

array size    sliding mean (s)    block mean (s)

1000x1000     3.85e-02            4.65...e-03
2000x2000     1.51e-01            1.75...e-02
3000x3000     3.41e-01            3.86...e-02
4000x4000     6.37e-01            7.20...e-02
5000x5000     9.46e-01            1.04...e-01

 

The slowest is obviously the sliding function since it does 9 samples for every 1 that block does when using a 3x3 window... I think... it is the easy stuff that confuses.  Enjoy and study the code and its origins should you be interested.  I should note that the whole area of sliding window implementation has been quite the area of investigation.  The implementation that I presented here is one that I chose to use because I liked it.  It may not be the fastest or the bestest... but I don't care... if it takes less than a sip of coffee, it is good enough for me.

 

That's all for now....

Yes ... for a whole $7 Canadian, you too can do cutting edge programming on your iThing ... iPad 2 or above, the iPhone maybe even the iWatch. See ...  Pythonista for iOS

 

Well, if you don't have an iPhone, iPad or iWhatever to program on, you can keep up with conventional python innovations and update here:

 

Coming to Python.... preparation and anticipation

 

Oh... and os.walk alone is so yesterday... os.scandir is the new kid providing speedups... but failing python 3.5, you can thankfully get it from github:

 

GitHub - benhoyt/scandir: Better directory iterator and faster os.walk(), now in the Python 3.5 stdlib

 

or just wait until 3.5 comes along.

 

Now ... the iStuff

I have mentioned the program Pythonista (available at the iStore) as a great program for doing programming.  I have been using the 2.7 version for about 2 years now and I am impressed with its IDE.  The new version runs both python 3.5 (yes... 3.5) and python 2.7.  It contains many, many modules by default, including numpy and MatPlotLib to name just a few.  You can ship your cool scripts off to github, OneDrive, all those cloud places.  You can even produce pdfs amongst other things.  A downside... some bizarre Apple policy  (no clue what it means) makes it just a tad hard to load a script directly... (but we all know how to copy and paste content don't we, or consult a 12 year old).

 

So, have a gander at your site-packages folder in your python distribution and just add more.  Do you like line numbering? code completion? indent/dedent? cool IDE themes (like the Dark Theme) and a whole load of other stuff?  Then you might be interested.  SciPy and Pandas aren't included yet, so you will have to do your Voronoi diagrams and Delaunay triangulations the old fashioned way through pure python and/or numpy or MatPlotlib.  Ooops, I forgot, most of you don't have it since you aren't using ArcGIS Pro yet, so you won't miss it (you will have to install Pro to get python 3.4.x and the SciPy stack with other goodies).

 

But how cool... standing at the bus stop... whip out the iPad and run a quick triangulation in Matplotlib... how to impress potential employers who might be watching.

 

image.png

I can't wait until esri gets something that allows me to do analysis on the iThings so I can work with arcpy, numpy and all my favorite libs and pys.  Imagine being able to graph data in the field as it is being collected...

image.png

 

 

Two screen grabs just to give you a quick view of other stuff.  I will update when I play with the new version more.

 

image.png

 

 

Sample script, using a partial split screen for output on an iPad.  Imagine how cool this would look on your iPhone

 

image.png

 

Alternate theme, for the moody programmer...

image.png

 

The python 2.7 and 3.5 distributions have their own access points with their own site_packages folder and even a shared one.  Kind of like a version of condas... maybe iCondas...

image.jpeg

 

Later

... Dan

A visual demo.  Nothing fancy, more for the record.

 

1 Begin by creating a toolbox

tbx00.png

 

2  Select Analysis Tools

tbx01.png

 

3  Add the script to the toolbox once it has been created.

tbx02.png

 

4  Now start filling out the dialog's General parameters.

tbx03.png

 

5  The actual Parameters section needs to be created to include Label, Name, Data Type, whether it is required, optional or derived, and whether it is an input or output parameter.

tbx04.png

 

6  Kind of obvious what needs to be done... elapsed time so far, 2 minutes.

tbx05.png

 

7  You can fill out tool validation if you want.  I personally don't bother unless someone is paying.

tbx06.png

 

8  Time for the help stuff...this is important though

tbx07.png

 

9  Yes... the less than obvious save button will make things good.

    You can include all kinds of stuff, like images etc.

tbx08.png

 

10 Ready to roll.

tbx09.png

 

11  Oh yes... the handy little 'i' symbol shows the tool help if you just need a quick tip on the input parameters.

tbx10.png

 

 

Total elapsed time from start to finish, less than 5 minutes.


The ...py... links

Posted by Dan_Patterson, May 8, 2016

Newest Blog posts and Updates 

 

2017-09-21 Whats new in Python 3.7 New

2017-09-01 Python Data Science Handbook (free) New

2017-08-29 ArcGIS API for Python 1.2.1 update New

2017-08-22 ArcGIS Pro 2.0.1 patch released... issues addressed New

2017-08-20 Are Searchcursors Brutally Slow? ...they need not be New

2017-08-17 Geospatial Analysis with Python GeoJSON and GeoPandas.  New and cool for those that don't know

2017-08-10 Understanding data types when using Pandas  New

2017-08-08 How to Extract Raster Values at Point Locations  New

2017-08-08 Pandas can be slow... options to speed things up  New

2017-08-02  ArcGIS Pro Essential Workflows - updated  New

2017-07-18  ArcGIS API for Python 1.2 update  New

2017-07-14 ArcGIS Pro 2 Issues Addressed New

2017-07-09 Arcgis Pro 2... Jupyter notebook setup basics  New

2017-06-29 ArcMap 10.5.1 Issues Addressed  New

2017-06-27 ArcGIS Pro 2.0 and ArcMap 10.5.1 New

2017-06-08 Numpy 1.13 released  New

2017-06-08  Machine Learning and Python Cheat Sheets (27 in all)  

 

.....  See the categories below for older posts

 

The Bug List and Change logs

Product information, main link including: Release notes, generic link to version fixes

ArcMap main link  http://support.esri.com/Downloads

ArcGIS:             10.5.1 issues addressed,  10.5.1 changes...,  version 10.5 fixes...,
                    version 10.4.1 fixes,  version 10.4 fixes...,  version 10.3 fixes...,
                    version 10.2 fixes...

ArcGIS Pro:         ArcGIS Pro 2.0.1 patch released... issues addressed,  Pro 2.0 Issues addressed,
                    Pro 2.0 goes live,  1.4 Issues addressed,  1.4 release notes

ArcGIS Python API:  ArcGIS API for Python.. V1.2  New

Numpy:              all versions,  latest version 1.13  New

Python:             from 3.7 back,  updated  New

SciPy:              release notes, all versions

MatPlotLib:         version 2 release

Pandas:             version 0.20 release

 

To be updated as I see fit.  I have posted a number of blog posts and documents and list them in reverse order.

They are categorized as follows:

------------------------------------------------------------------------------------------------------------------------

My toolboxes on ArcScripts 2.0 Beta  Link...

 

-------- Analysis:

 

-------- Scripting and associated activities:

 

-------- Raster and array analysis:

 

-------- Graphing, Data and Statistics:

 

-------- Esoterica:

------------------------------------------------------------------------------------------------------------------------

-------- Documentation...

 

ArcGIS Pro Essential Workflows     ArcGIS Pro Essential Workflows - updated 

Python 2 and 3 key differences       key differences link

Main documentation page                Documentation | ArcGIS for Desktop

Arcpy and geoprocessing                What is geoprocessing?—Help | ArcGIS for Desktop

A good source of code examples    http://stackoverflow.com/

Esri at GitHub                                    Esri GitHub ...

Python home page for all versions   Welcome to Python.org

NumPy/SciPy documentation          NumPy — Numpy

Python for iThingy's                          Pythonista    comes with matplotlib, numpy and loads more

Advanced Python notes                    Python Scientific Lecture Notes — Scipy lecture notes

Matplotlib the graphing package       matplotlib: python plotting — Matplotlib 1.4.3 documentation

PythonPedia                                      https://pythonpedia.com/

Awesome Python                              http://awesome-python.com/

ArcGIS Pro, SciPy, Python...            Python en ArcGIS Pro - CCU2015.pdf

SciPy Lecture Notes                          Scipy Lecture Notes

How to make mistakes in python    How to make mistakes in Python... a useful link

--------

Masks ... nodata ... nulls

The attached pdf will serve for now.  I will add additional documentation on how to work with rasters with nodata areas here soon.

 

A simple example here, using Matplotlib to do the mapping.  The -- cells are nodata values

FullSizeRender-1 1.jpg

Raster/array values  |  Sample properties
>>> print(c.filled())  # -9 is nodata
[[ 3  1  3 -9  3  2 -9 -9 -9  2  2  2  3 -9  1 -9  3  3 -9  1]
 [ 1  3  2  3 -9 -9 -9  3  1  3  3 -9  3  2  3  3  3  3  1  1]
 [ 2  2  2 -9  1  2  2 -9  2  2  3 -9  3 -9  3 -9  2  2  1  3]
 [ 3  1  3  2 -9 -9  2 -9  2 -9 -9  1  2  1  1  2  1  3  3 -9]
 [-9  2 -9  2 -9 -9  1  2  1 -9 -9  2  2  1  1  1  3  1  3  3]
 [-9  2  3  3  1  2  2  3 -9  1  1  3  1 -9 -9 -9  2  1  3 -9]
 [-9 -9 -9  2  1  1 -9  2  2  1  1  2  2  3  2  3  2  2  2 -9]
 [ 3 -9  3 -9 -9  2  3 -9  3  2  2  2  1 -9  3  2 -9  2  2  1]
 [ 1 -9  2  2  1 -9  2  1  2  2 -9 -9  3 -9  2  2 -9  1 -9  1]
 [ 1  1 -9 -9 -9  2 -9  2  3  2 -9  1  2  1  3  1 -9 -9  1  3]
 [ 1 -9 -9  1  2  1 -9  1 -9 -9 -9 -9  1 -9 -9 -9 -9  2 -9  3]
 [-9  3 -9 -9 -9  2  3 -9 -9  1  2  1  1  2  1  1  3  2  3  2]
 [ 3  3  3 -9  3  1  3 -9 -9 -9  3  2 -9 -9  3  2  3 -9  1 -9]
 [-9  2  2  3  3  1  3  1 -9 -9  2  3  3  1  1  1  1  1 -9  1]
 [-9  1 -9  3  1  1 -9  2 -9  1  1  2  2  1 -9  2  2  3 -9  3]
 [ 3  2 -9  2  2 -9 -9  2  1  2  1  2 -9  3  2  1  1  1  3  1]
 [-9 -9  3  2  2 -9  2  2  1  2 -9  3  1  2  2 -9  3  3  2  1]
 [ 1 -9  1 -9  2  2  3 -9  3  2  2 -9  1  2  3 -9 -9 -9  3  3]
 [ 3  1  1  1  2  1  2 -9 -9  2  2  1  2 -9 -9 -9  2 -9  3  2]
 [-9  2  3  1 -9  1  1 -9  2 -9  1  1  1  2  1  2  3 -9 -9  3]]

>>> c.mean()
1.96028880866426
>>> c.min()
1
>>> c.max()
3
>>> np.histogram(c, bins=[1, 2, 3, 4])
(array([ 92, 104,  81]), array([1, 2, 3, 4]))
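
A minimal sketch of how such a masked array can be built (the random class values and the 25% nodata fraction are made up for illustration):

import numpy as np

vals = np.random.randint(1, 4, size=(20, 20))        # classes 1, 2, 3
nodata = np.random.random((20, 20)) < 0.25           # ~25% nodata cells
c = np.ma.masked_where(nodata, vals)
c.set_fill_value(-9)

print(c.filled())                                    # -9 is nodata
print(c.mean(), c.min(), c.max())                    # stats honour the mask
print(np.histogram(c.compressed(), bins=[1, 2, 3, 4]))   # counts of 1s, 2s, 3s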

There are a large number of questions that deal with some aspect of distance.  If you are interested in quick distance calculations, then download and read the attachment.

The tools to get the data into this form are built in to arcpy, which facilitates the calculations.

You can determine distances using different metrics... euclidean distance is shown here.

 

If you need to find the nearest, a list of closest objects or to calculate the perimeter of a polygon or length of a line represented by a sequence of points, there are other options available to you.

 

Associated references:

 

Python Near Analysis (No Advanced Licence)

 

Single origin to multiple destinations  |  Multiple origins to multiple destinations

 

The example here uses a single origin in an origin list.  You will note that the coordinates are given as a list, of lists, of coordinates.

Example 1...

Single origin...
[[ 0.  0.]]
Multiple destinations
[[ 4.  0.]
 [ 0.  2.]
 [ 2.  2.]
 [-2. -3.]
 [ 4.  4.]]
Distances: [ 4.    2.    2.83  3.61  5.66]
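
A minimal numpy sketch of the idea behind those numbers (not the attached code).  It broadcasts the origins against the destinations, so the same lines reproduce Example 2 when more origins are added:

import numpy as np

orig = np.array([[0., 0.]])                            # single origin
dest = np.array([[4., 0.], [0., 2.], [2., 2.],
                 [-2., -3.], [4., 4.]])                # destinations
diff = dest - orig[:, None]                            # broadcast the differences
dist = np.sqrt(np.einsum('ijk,ijk->ij', diff, diff))   # euclidean distances
print(np.round(dist, 2))        # [[ 4.    2.    2.83  3.61  5.66]]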

 

An example for 3D length/distance

X, Y and Z values array, their differences and the resultant distances and the length if they formed a circuit

e_leng(d3d,verbose=True)
Input array....(shape=(1, 8, 3))
[[[ 0.  0.  0.]
  [ 1.  1.  1.]
  [ 0.  1.  0.]
  [ 1.  0.  1.]
  [ 0.  1.  1.]
  [ 1.  1.  0.]
  [ 1.  0.  0.]
  [ 0.  0.  1.]]]
differences...


[[[-1. -1. -1.]
  [ 1.  0.  1.]
  [-1.  1. -1.]
  [ 1. -1.  0.]
  [-1.  0.  1.]
  [ 0.  1.  0.]
  [ 1.  0. -1.]]]
distances...
[[ 1.73  1.41  1.73  1.41  1.41  1.    1.41]]
length...[ 10.12]
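
A minimal sketch of the 3D length calculation (my reconstruction, not the author's e_leng):

import numpy as np

pnts = np.array([[0., 0., 0.], [1., 1., 1.], [0., 1., 0.], [1., 0., 1.],
                 [0., 1., 1.], [1., 1., 0.], [1., 0., 0.], [0., 0., 1.]])
diff = np.diff(pnts, axis=0)                       # successive differences
seg = np.sqrt(np.einsum('ij,ij->i', diff, diff))   # segment distances
print(np.round(seg, 2))        # [ 1.73  1.41  1.73  1.41  1.41  1.    1.41]
print(np.round(seg.sum(), 2))  # 10.12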

This example  uses the same destinations.

 

Example 2...
Multiple Origins...
[[[ 0.  0.]]
 [[ 1.  0.]]
 [[ 0.  1.]]
 [[ 1.  1.]]]

Destinations...
[[ 4.  0.]
 [ 0.  2.]
 [ 2.  2.]
 [-2. -3.]
 [ 4.  4.]]

Distances...
[[ 4.    2.    2.83  3.61  5.66]
 [ 3.    2.24  2.24  4.24  5.  ]
 [ 4.12  1.    2.24  4.47  5.  ]
 [ 3.16  1.41  1.41  5.    4.24]]

Origin-Destination, distance matrix
dests->:     0     1     2     3     4
origins
      0: [ 4.    2.    2.83  3.61  5.66]
      1: [ 3.    2.24  2.24  4.24  5.  ]
      2: [ 4.12  1.    2.24  4.47  5.  ]
      3: [ 3.16  1.41  1.41  5.    4.24]

 

That's all for now...

I always forget.  I read about something pythonic, then quasi-remember what it was I read.  So this is what is coming to python.  A look into the future.  Preparing for the future.

 

Python 3.0 released  on December 3rd, 2008.

Python 3.6 expected final  December 12th, 2016    8 years old!

Python 3.7 alpha released  September 23rd, 2017

 

New Additions:                                                                             Update: 2017-06-28

Python 3.7.0a math.remainder ...

Numpy 1.13 release notes  updated

Apparently the wait will continue arcmap 10.5 python version??

Recent blog  not using python 3?.... 

Python 3.7 beta...  Overview — Python 3.7.0a0 documentation 

Conda tips and tools...  Conda and ArcGIS Pro | ArcPy Café 

Python 3.6 final coming soon... sorted dictionary keys and more  Release information and changes

Anaconda, Spyder and PRO Anaconda, Spyder and ArcGIS PRO

Python & Pro  Python and ArcGIS Pro 1.3 : Conda

Anaconda  Download Anaconda now! | Continuum

SciPy.org    SciPy.org — SciPy.org latest release for numpy 1.11.1 and SciPy 0.17.1

Learning resources:  Data Science in Python

        GitHub - Data Science Python: common data analysis and machine learning tasks using python

 

A classic example is for those of you that have avoided reading:

 

  Python Mini Formatting Language .

 

Oh wait!! Many of you are still using python 2.7.  Good news! Install ArcGIS PRO and enter the realm of Python 3.4.  Yes I did say 3.4... but it gets worse.... Python 3.6 is in beta as of Sept 2016.

 

Python 2.7.x is the last of the 2.x series.  It will remain on support for some time (to indefinitely), but there have been lots happening since Python 3.0 was introduced (homework... when was python 3.0 introduced).

 

So this is not going to be a retrospective going back to python 3.0 and what it introduced, but I will highlight what is going forward in the python world since python 3.6 is set for final release soon.. and you will be using python 3.4 ... IF you install ArcGIS PRO or dabble in the neverworld of alternate installs.  So in reverse order, this is what I think will be useful for you to look forward to, and what you may have missed.

 

python_help.png

 

-------------------------------------------------------------------------------------------------

Contents : Python  NumPy  SciPy  Matplotlib  Pandas       Update:  2017-02-18

Last Update:   numpy 1.12 and scipy 0.17.1; added the Pandas 0.18.1 release  What’s New — pandas 0.18.1 documentation

--------------------------------------------------------------------------------------------------

GitHub section

Esri             GitHub - Esri/esri.github.com: Esri on Github

Matplotlib   GitHub - matplotlib/matplotlib: matplotlib: plotting with Python

Numpy        GitHub - numpy/numpy: Numpy main repository

Pandas       GitHub - pydata/pandas: Flexible and powerful data analysis ...

SciPy          GitHub - scipy/scipy: Scipy library main repository

Sympy        GitHub - sympy/sympy: A computer algebra system written in pure Python

--------------------------------------------------------------------------------------------------

Python section

The main link  What’s New in Python ... this goes back in history

Pre-existing functionality  What is in version 3 that existed in version 2.6

with statement, print as a function, io module

 

----- python 3.7 ---------------------------------------------------------------------------------------
What’s New In Python 3.7 ... main page
Highlights

----- python 3.6 ---------------------------------------------------------------------------------------

What’s New In Python 3.6 ... main page

Highlights

>>> name = "Fred"

>>> f"He said his name is {name}."

'He said his name is Fred.'

---- python 3.5 -------------------------------------------------------------------------------------------

What’s New In Python 3.5 ...main page

  • Highlights

>>> *range(4), 4                        (0, 1, 2, 3, 4)

>>> [*range(4), 4]                      [0, 1, 2, 3, 4]

>>> {*range(4), 4, *(5, 6, 7)}      {0, 1, 2, 3, 4, 5, 6, 7}

>>> {'x': 1, **{'y': 2}}                 {'x': 1, 'y': 2}

 

  • Improved Modules
    • link includes, but not limited to:
      • collections, csv, distutils, doctest, enum, inspect, io, json, locale, logger, math, os, pathlib, re, readline, sys, time, timeit, trackback, types, zipfile
      • collections.OrderedDict is now implemented in C which makes it 4 to 100 times faster.

      • csv.writer.writerows now supports any iterable
      • enum.Enum now supports a start number for text enumeration
      • math ... added nan and inf constants (finally!!!) and gcd (greatest common divisor)
      • os.walk  significant speed improvements

 

---- python 3.4 -----------------------------------------------------------------------------------------

What's New in Python 3.4 ... main page

  • Highlights

>>> import statistics

>>> dir(statistics)

[ .... snip ... 'mean', 'median', 'median_grouped', 'median_high', 'median_low', 'mode', 'pstdev', 'pvariance', 'stdev', 'variance']

  • Improved Modules
    • link includes, but not limited to:  collections, inspect, multiprocessing, operator, os, re, sys, textwrap, zipfile
      • textwrap adds   maxlines, placeholder and shorten

---- python 3.3 and before  -------------------------------------------------------------------------------

Previous stuff (aka pre-3.4)

  • porting to 2to3            2to3 - Automated Python 2 to 3 code translation
    • changes to or replacement of :  apply, asserts, dict (dict.items(), dict.keys(), dict.values()), exec, has_key, isinstance, itertools, long, map, next, nonzero, operator, print, raise, range, raw_input, unicode, xrange, zip
  • virtual environments    pep 405
  • python launcher          pep 397
  • ordered dictionaries    pep 372
  • print as a function       pep 3105
  • sysconfig                    Python’s configuration information
  • argparse
  • order dictionaries
  • print statement
  • division  / float, // integer

way more ..... you will just have to read the original migration docs.

Significant changes to text handling and the migration to full unicode support.

Some of the changes have been implemented in python 2.7 and/or can be accessed via ... from __future__ import ....

 

-------------------------------------------------------------------------------------------------

Anaconda section

Anaconda, Spyder and ArcGIS PRO

Python and ArcGIS Pro 1.3 : Conda

Download Anaconda now! | Continuum

-------------------------------------------------------------------------------------------------

NumPy section

The reverse chronological list of changes to numpy Release Notes — NumPy v1.12 Manual

numpy 1.12

- einsum optimized for speed

- keepdims added to many functions

- axis keyword for rot90

- flipud and fliplr now have axis generalization

- nancumsum and nancumprod added to the nan line of functions

- too many more to list

numpy 1.10

np.rollaxis, np.swapaxis  now return views

np.ravel, np.diagonal, np.diag  now preserve subtypes

recarray field and view  changes in how data are treated, see documentation

matrix @ operator implemented in keeping with its implementation in python 3.5

-------------------------------------------------------------------------------------------------

SciPy section

Too many to list, but the change logs are given here Release Notes — SciPy v0.17.0 Reference Guide

SciPy central  http://central.scipy.org/

-------------------------------------------------------------------------------------------------

Matplotlib section

users guide        User’s Guide — Matplotlib 1.5.1 documentation

chronologic change history   What’s new in matplotlib — Matplotlib 1.5.1 documentation

-------------------------------------------------------------------------------------------------

Pandas section

Pandas 0.18.1         What’s New — pandas 0.18.1 documentation

The full list is here   pandas main page and release notes