ArcGIS Online Item Usage export

01-03-2017 07:48 AM
IanShiland2
New Contributor

Hello everyone, 

We are looking to export the usage of applications on our AGO site - not credit usage, but actual views and associated analytics. It appears you can only view this data, not export it in any way. It would be great if we could somehow export the data via Python. The attached image shows what we are after. Thanks!

4 Replies
AdrianWelsh
MVP Honored Contributor

Hi Ian,

I believe this is not available yet, but from what I can tell it is something that has been looked at. Take a look at this Idea (and consider voting it up):

Access Raw (Rest API) data from AGOL Feature Service usage statistics 

There are many other Ideas along similar lines that would be worth voting up as well. Also, this thread may be best placed in the ArcGIS Online space.

(another Idea that may be worth voting up: https://community.esri.com/ideas/5439)

IanShiland2
New Contributor

Thanks, I just voted that Idea up. Not sure this thread will go anywhere in the AGO space, but mods are welcome to move it.

ThomasColson
MVP Frequent Contributor

I am convinced that this and this will do as you ask, but it would require some Python tinkering that is beyond my skill level. I, too, am looking for the same thing. I have hundreds of hosted feature services, and we need a more automated way of getting those usage numbers into a spreadsheet so we can look at service usage globally and make management decisions on how or when we update data. Having to click the usage button on 400 different item description pages and then screenshot the graph does not meet that need. I am surprised that, with such a well-documented REST API, Esri has not included this basic capability.

ThomasColson
MVP Frequent Contributor

The following, albeit a bit bulky, will read from a list of hosted feature services and create a CSV for each, with the date and the number of service calls per date. You can change the date range by changing the epoch timestamps (a sketch for computing them follows the script).

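For reference, the script expects a service.csv next to it with one hosted feature service name in the first column of each row. A hypothetical example (these service names are made up for illustration):

Trails
Campgrounds
Backcountry_Campsites
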
import csv
import urllib
import urllib2
import json
import datetime

# First get a token from the AGOL sharing API
tokenURL = 'https://www.arcgis.com/sharing/rest/generateToken'
params = {'f': 'pjson', 'username': 'username', 'password': 'password',
          'referer': 'https://www.arcgis.com'}
req = urllib2.Request(tokenURL, urllib.urlencode(params))
response = urllib2.urlopen(req)
token = json.load(response)['token']

# Read the list of AGOL service names (first column of service.csv)
readCSV = list(csv.reader(open('service.csv')))

# Epoch start stamps (milliseconds) for seven blocks covering 2016.
# Esri truncates usage stats into blocks of approx. 52 days, so a
# single call for the entire year will fail.
startTimes = [1451624400000, 1456808400000, 1461988800000, 1467172800000,
              1472356800000, 1477540800000, 1482728400000]
labels = ['2016', 'the last 313 days of 2016', 'the last 261 days of 2016',
          'the last 209 days of 2016', 'the last 157 days of 2016',
          'the last 105 days of 2016', 'the last 53 days of 2016']

for row in readCSV:
    name = row[0]
    # Generate one URL per block. Replace org with your org name and
    # org_id with your org id.
    urls = ['https://org.maps.arcgis.com/sharing/rest/portals/org_id/usage'
            '?f=json&startTime=' + str(start) + '&endTime...' + name +
            '&stype=features&token=' + token for start in startTimes]
    # A feature posted mid-year has no stats for the earlier blocks and
    # those calls fail, so drop the earliest block and retry with fewer
    # calls until a set succeeds.
    for skip in range(len(urls)):
        try:
            print 'Reading the usage statistics for ' + name + ' for ' + labels[skip] + '....'
            filename = name + '.csv'
            data = []
            for url in urls[skip:]:
                data = data + json.loads(urllib.urlopen(url).read())['data'][0]['num']
            print 'Writing ' + filename + '....'
            with open(filename, 'wb') as output:
                csv_writer = csv.writer(output)
                csv_writer.writerow(['date', name])
                for date, count in data:
                    # Stamps come back in milliseconds; convert to datetime
                    csv_writer.writerow(
                        [datetime.datetime.fromtimestamp(int(date) / 1e3), count])
            break
        except Exception:
            continue
    else:
        print 'No usage statistics available: time range too short'
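
The epoch stamps above are millisecond timestamps marking the block boundaries across 2016 (the script's stamps happen to fall on US Eastern midnights). If you need a different range, here is a minimal sketch for computing your own stamps, assuming midnight-UTC boundaries; to_epoch_ms is just an illustrative helper name, not part of the script above:

import calendar
import datetime

def to_epoch_ms(year, month, day):
    # Milliseconds since the Unix epoch for midnight UTC on the given
    # date (illustrative helper, not part of the original script)
    return calendar.timegm(datetime.datetime(year, month, day).timetuple()) * 1000

print to_epoch_ms(2016, 1, 1)   # 1451606400000 -> a startTime value
print to_epoch_ms(2016, 3, 1)   # 1456790400000 -> a matching endTime value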