IDEA
There are a variety of reasons org admins might not want to retain content items indefinitely in the Software-as-a-Service ArcGIS Online organization: cumulative burn of storage credits figures prominently, and much hosted content becomes stale over time (especially at universities, where named users are transient). What I propose is a mechanism to offload/export/archive hosted content to a local ArcGIS Enterprise instance, as follows:
1) Ahead of time, the organization's IT/GIS support staff stand up a dedicated ArcGIS Enterprise site with adequate storage. That Enterprise site is then set up in a collaboration with the SaaS AGOL org.
2) In AGOL, a content owner clicks an "Archive" button on the item details page of a large hosted feature layer.
3) A new feature service is created on the ArcGIS Enterprise site from the hosted layer in AGOL, with all the data from the hosted layer.
4) In AGOL, the ItemID remains unchanged (this is critical) but now points *by reference* to the service on the ArcGIS Enterprise "archival" server through the collaboration.
5) Finally, in AGOL, the hosted feature service is deleted, freeing up storage credits.
6) Because the ItemID is unchanged, the content can still be referenced from dependent Web Maps with no configuration changes.
7) Should it become desirable or necessary to move archived content BACK to SaaS AGOL, the process above should be reversible: recreate the hosted feature service, reuse the same ItemID, and remove the reference link.
The situations in which this might be useful are many. Here are a few:
- A feature service outgrows the SaaS environment because it consumes a lot of storage credits, but many web maps reference it by ID. It could be archived to ArcGIS Enterprise with no disruption to the dependent Web Maps, provided ItemIDs are retained.
- University org admins develop a rubric for identifying "stale" content. They implement it, flagging several hundred feature services and exporting those to ArcGIS Enterprise. Of the many services archived, three people complain, and for those special cases the admins reverse the process, replacing the reference links to the off-platform ArcGIS Enterprise site with a physical hosted feature service on Esri's SaaS infrastructure.
- An organization decides to clone their entire AGOL organization to ArcGIS Enterprise.
Collaborations are probably not the only way this functionality could be implemented.
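The round trip above can be sketched in miniature. This is a toy model, not a real API: the function names, the dictionary standing in for the portal's item catalog, and the URLs are all illustrative assumptions. The point is only that the ItemID key survives both directions of the move.

```python
# Hypothetical sketch of the proposed archive/restore workflow.
# The catalog dict stands in for the portal's item store; no real
# ArcGIS Online or Enterprise API is used or implied here.

def archive_item(catalog, item_id, enterprise_url):
    """Steps 3-5: copy data to Enterprise, repoint the same ItemID
    by reference, and drop the hosted copy."""
    item = catalog[item_id]
    if item["kind"] != "hosted":
        raise ValueError("only hosted items can be archived")
    archived_url = f"{enterprise_url}/rest/services/{item_id}/FeatureServer"
    # Same ItemID, new by-reference target; dependent web maps see no change.
    catalog[item_id] = {"kind": "reference", "url": archived_url,
                        "title": item["title"]}
    return catalog[item_id]

def restore_item(catalog, item_id, agol_url):
    """Step 7: reverse the archive, recreating a hosted item
    under the identical ItemID."""
    item = catalog[item_id]
    if item["kind"] != "reference":
        raise ValueError("only referenced items can be restored")
    catalog[item_id] = {"kind": "hosted",
                        "url": f"{agol_url}/rest/services/{item_id}/FeatureServer",
                        "title": item["title"]}
    return catalog[item_id]

catalog = {"12ab34cd": {"kind": "hosted",
                        "url": "https://services.example.com/FeatureServer",
                        "title": "Campus Trees"}}
archive_item(catalog, "12ab34cd", "https://gis.example.edu/arcgis")
assert catalog["12ab34cd"]["kind"] == "reference"   # ItemID unchanged
restore_item(catalog, "12ab34cd", "https://agol.example.com")
assert catalog["12ab34cd"]["kind"] == "hosted"
```

Because the key never changes, any "web map" holding `"12ab34cd"` keeps working across the move, which is exactly the invariant the proposal depends on.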
Posted 02-10-2023 10:15 AM | 4 kudos
IDEA
ItemIDs are used to reference content in ArcGIS Online. They are guaranteed unique in the SaaS AGOL namespace and are immutable once created. The issue is that if a content item is deleted and then re-created, it gets a new ItemID, and all dependent items (e.g., Web Maps, applications) have to be manually updated. If, in contrast, there were some layer of indirection between the hard-coded ItemID and the applications that use it, then the following becomes a possibility:
1) Bob creates a feature service, ItemID 12345, but assigns it a mutable alias, "AuthoritativeData123", much like esriURL.com or any other URL shortener.
2) Alice makes a Web Map using the alias (ideally the system defaults to using an alias if one exists, or the physical ItemID if not).
3) Bob deletes ItemID 12345 by mistake, but realizes it and re-creates the content as a new item. Its ItemID is now 67890. BUT, he assigns it the same alias, "AuthoritativeData123", so Alice's Web Map still works, and furthermore Bob does not need to track down the likely numerous dependencies and get their owners to update their ItemID references.
This is a bit like the concept of a CNAME record in DNS. The A record of the host referenced by a CNAME can change, but this is invisible to users who only know the CNAME alias, which is usually more human-readable. This allows administrators to change out hardware or VMs, move workloads, etc., without coordinating with the numerous users who have a hardcoded reference to a particular machine. I'm proposing that ItemIDs be treated in a similar manner: just as in DNS, if you don't want to create CNAME records, referencing a host by its A record still works just fine, but CNAMEs give you an extra layer of abstraction if you need to change something at the host level.
In this scenario, users might choose to create ItemID aliases for important content, and the AGOL system would then check whether an alias exists for an item and, if so, use it in place of the hardcoded ItemID.
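The lookup itself is trivial, which is part of the appeal. The sketch below models the proposed alias layer with plain dictionaries; the alias table, item store, and `resolve` helper are all hypothetical, since no such feature exists in AGOL today.

```python
# Toy model of the proposed ItemID alias layer, analogous to a DNS
# CNAME sitting in front of an A record. Everything here is hypothetical.

aliases = {}                                   # mutable alias -> physical ItemID
items = {"12345": "Bob's feature service"}     # physical ItemID -> content

def resolve(ref):
    """Follow an alias if one exists; otherwise treat ref as a physical ItemID,
    just as a hostname without a CNAME still resolves via its A record."""
    return aliases.get(ref, ref)

# 1) Bob publishes ItemID 12345 and registers a friendly alias for it.
aliases["AuthoritativeData123"] = "12345"
# 2) Alice's web map stores the alias, not the physical ID.
web_map_layer = "AuthoritativeData123"
assert items[resolve(web_map_layer)] == "Bob's feature service"
# 3) Bob deletes and recreates the item; only the alias record changes.
del items["12345"]
items["67890"] = "Bob's feature service"
aliases["AuthoritativeData123"] = "67890"
assert items[resolve(web_map_layer)] == "Bob's feature service"  # map still works
```

Note that `resolve` falls through to the raw ItemID when no alias exists, so existing hardcoded references would keep working unchanged.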
Posted 02-10-2023 09:38 AM | 3 kudos
IDEA
It is possible to e-mail all users in an ArcGIS Online org or Portal for ArcGIS site using the ArcGIS API for Python. Here's a link to a sample Jupyter notebook that a non-programmer can modify and execute (carefully) inside ArcGIS Notebooks as an org admin. It's based on what I use in production for mass notification at Virginia Tech. @WStreet @RobertBorchert this does not require any third-party access to the portal. Feel free to adapt this to suit your needs. https://gist.github.com/sspeery/e930642292ca5212fc3c23b3b983c46e
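For readers adapting the notebook, the core pattern is: enumerate the org's members, collect their e-mail addresses, and send in manageable batches. Below is a minimal, self-contained sketch of the batching step only; the commented-out `arcgis` lines show roughly where the real address list would come from, and the batch size of 50 is an arbitrary assumption, not a platform limit.

```python
# Hedged sketch of the mass-notification pattern: gather addresses, then
# split them into BCC-sized chunks so one oversized recipient list doesn't
# get rejected by the mail relay. Batch size is an assumption.

def bcc_batches(addresses, batch_size=50):
    """Split a recipient list into BCC-sized chunks."""
    return [addresses[i:i + batch_size]
            for i in range(0, len(addresses), batch_size)]

# Inside ArcGIS Notebooks the address list would come from something like:
# from arcgis.gis import GIS
# gis = GIS("home")
# addresses = [u.email for u in gis.users.search(max_users=10000) if u.email]
addresses = [f"user{i}@example.edu" for i in range(120)]   # stand-in data
batches = bcc_batches(addresses)
assert len(batches) == 3 and len(batches[-1]) == 20
```

Each chunk would then be handed to whatever mailer the notebook uses (e.g. `smtplib`), one message per batch.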
Posted 02-10-2022 07:40 AM | 0 kudos
POST
I'd like to add my full agreement to everything Peter has taken the time to point out in this discussion. At my institution, we follow a similar approach to Peter's (using a modified version of the same Python code): a script running as a cron job looks for new users and then assigns them the entitlements that can't be set as defaults via the AGOL administrative GUI. As Peter points out, our approach is a bit of a workaround, insofar as we assign each auto-provisioned Enterprise Login user a temporary "New User" role that serves as a flag for the ArcGIS Python API script to enumerate the users added since the last run, and then assign the entitlements that, by all rights, ought to be available to the named users at the moment of account provisioning, instead of after a delay of a few minutes between cron job runs. The point raised in this thread that enterprise-grade SaaS solutions should not require out-of-band configuration tweaks to deliver basic functionality is one worth echoing. The promise of Enterprise Logins is that an institution can bring its own identity management infrastructure, and with that comes the expectation that when a user enters the system for the first time, they'll actually be able to use it. This is not currently the case; the Python API user management workflows implemented at Michigan and Virginia Tech are performing the function of completing the delivery of baseline functionality, not, as would normally be expected of such under-the-hood tinkering, enhancing functionality in ways that a SaaS vendor could not reasonably be expected to deliver by default at user provisioning time.
From an IT process standpoint, the fact that ArcGIS Online doesn't "work" out of the box (in the sense that a named user, once provisioned, can immediately access and fully leverage the capabilities of the platform) means that another layer of software (our Python API workarounds) and the systems that run it must be managed jointly with the SaaS components. Because our out-of-band code is not an integral part of the platform, its existence, importance, and operating principles may not be immediately obvious to others who might be tasked with administering the system in the case of staff turnover. Certainly, documenting modifications to enterprise systems is an integral part of performing such modifications, and that's on us, but the point here is that the administrative footprint of AGOL is larger than meets the eye: a "wrapper" of utility scripts is grafted around the SaaS core to redress the deficiencies of the latter, and the possible ways the system can break expand with the introduction of custom utility scripts that may have their own, non-obvious failure modes. I would go a step further than Peter in advocating for yet another piece of missing functionality: the ability to pre-provision named users from an Enterprise Login identity store without any action on the part of the user themselves. Brittle and hackish as it may be, our Python-based entitlement assignment workaround does solve the "linear scalability" problem of entitlement assignment, and it provides named users with the illusion that everything "just works" (within 0-5 minutes of their first login), but, critically, this only happens after the user has logged into the organization for the first time.
Frequently, we as org admins get requests to set up groups for classes, research teams, special projects, etc., and regardless of whether one uses an invitation workflow or named user auto-provisioning, we can't do anything (add entitlements, assign group membership, etc.) until the user (who may be on a geographically dispersed team, have varying levels of IT literacy, or lack a compulsive e-mail-checking habit) has taken the affirmative step of pointing their browser at our organization. Since we bring our own identities with Enterprise Logins, we know who these people are, but we lack the ability to initiate the process that links Enterprise Identity <user@vt.edu> with Named User <user_virginiatech>, which is a prerequisite to entitlement assignment and everything else being discussed in this thread. There are best practices for identity and license management in the broader non-geospatial enterprise SaaS space, and while, to Esri's credit, they have incorporated many of these over the years, a good development target for future AGOL releases would be a state of affairs in which a named user can be created from a previously known enterprise identity and fully endowed with the privileges they should have as a user of ArcGIS Online without either a) end-user intervention or b) out-of-band scripting. -- Seth Peery, Virginia Tech
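The cron-driven workaround described above can be sketched as follows. This is a simplified stand-in, not our production code: the role name, the entitlement names, and the in-memory "org" are all assumptions, and a real implementation would use the ArcGIS API for Python against the live portal.

```python
# Simplified model of the "New User" flag-role workaround. Each cron run
# finds flagged users, grants the baseline entitlements that AGOL can't
# assign as defaults, then clears the flag so the next run skips them.
# Role and entitlement names below are illustrative assumptions.

NEW_USER_ROLE = "New User"                           # temporary flag role
DEFAULT_ENTITLEMENTS = ["ExtensionA", "ExtensionB"]  # placeholder names

def provision_new_users(org):
    """Grant baseline entitlements to flagged users and clear the flag."""
    provisioned = []
    for user in org:
        if user["role"] == NEW_USER_ROLE:
            user["entitlements"].extend(DEFAULT_ENTITLEMENTS)
            user["role"] = "User"   # remove the flag; run is idempotent
            provisioned.append(user["name"])
    return provisioned

org = [{"name": "user_virginiatech", "role": "New User", "entitlements": []},
       {"name": "older_user", "role": "User", "entitlements": ["ExtensionB"]}]
assert provision_new_users(org) == ["user_virginiatech"]
assert org[0]["entitlements"] == ["ExtensionA", "ExtensionB"]
```

The flag-role trick is what makes repeated cron runs safe: once the role is rewritten, the user no longer matches the query, so entitlements are never double-assigned.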
Posted 12-11-2018 10:11 AM | 3 kudos
IDEA
It would be very useful for ArcGIS Online administrators to be able to automatically allocate a named user in an AGOL organization that uses Enterprise Logins, without having to send that user an invitation. We could then go ahead and add those users to the appropriate groups at account provisioning time, instead of having to wait for them to accept the invitation. This would be similar to the functionality in group management whereby a group owner can automatically add current enterprise logins to a group without requiring e-mail confirmation. If this is not feasible, the next best thing would be the previously suggested Idea at https://community.esri.com/ideas/11677
Posted 11-10-2016 12:56 PM | 6 kudos
POST
Jeremy, I'm not currently using replication, but I would be happy to discuss this with you further offline to keep the current thread on-topic. Send me an e-mail directly if you're still interested and we can chat further. You can find all my contact information at http://gis.vt.edu.
___________________________________________________
Seth Peery
Senior GIS Architect, Enterprise GIS
Virginia Tech Geospatial Information Sciences
Virginia Polytechnic Institute and State University

Quoted message from Jeremy Hazel:
Hi Seth- It's Jeremy Hazel; we worked together fall semester in 2009. I'm interested to know if you are using any replication at all, or are you only maintaining a single database? I'm having some issues with compressing a replicated database and thought maybe you could help. I've posted a thread on this forum here: http://forums.arcgis.com/threads/26048-SDE-compression-and-One-way-replication. We are using sdetable -o update_dbms_stats within a batch file before and after our compression operation, but we are not updating nearly as many tables as you; we specify the tables to update stats on individually. Would it be worth it for you to use Python to iterate through the table_registry, grab the name of each table, write it out to the -t option of the sdetable -o update_dbms_stats command, appending a new command for each table, and then run the resulting file as a batch process? Just a thought...
Posted 04-15-2011 12:02 PM | 0 kudos
POST
Thanks to all who responded. After considering your replies and advice, I wrote the following bash shell script to perform the batch analyze operation. I've tested it on my ArcSDE instance of ~1300 objects, and found that it works pretty well in my environment. I'm releasing what I've created back to the community in hopes that someone else will find it useful. I've tried to generalize the script as much as possible to facilitate reuse by allowing users to configure the necessary input parameters up front. If this is the wrong place for this (ArcScripts, perhaps?) I can post it elsewhere. Any comments and suggestions for improvement are welcome.
#!/bin/bash
##########################################################################
# Batch analyze script
# Query ArcSDE to get a list of all datasets in the database, then
# iterate through each owner.dataset to update DBMS statistics (ANALYZE).
# Output is logged to a file and optionally, a summary may be e-mailed to
# the ArcSDE administrator.
# Tested to work with ArcSDE/Oracle on RHEL 5.
# Author: Seth Peery, Virginia Polytechnic Institute and State University
# E-mail: sspeery@vt.edu
# Last Modified 2011-04-15
# Licensed under a Creative Commons Attribution 3.0 Unported License
# http://creativecommons.org/licenses/by/3.0/
##########################################################################
# PREREQUISITE:
# Set the following variables to match your environment
# N.B. be sure to chmod the referent of PASSWORDFILE to 600!
OUT_DIR=/path/to/script/output # logfile destination
PASSWORDFILE=$OUT_DIR/sdepass.txt # a single-line file with sde password
SERVER_NAME="myserver.example.com" # hostname of SDE server
DBMS_INSTANCE="my_TNS_entry" # TNS entry for SDE's Oracle Instance
SEND_EMAIL=true # set to true to send $SUMMARYFILE
ADMIN_EMAIL="foo@example.com" # admin's e-mail address
LAYERLIST=/tmp/allsdelayers.txt # no real need to change this
# Now, process the above to set password and output location variables.
PASSWORD=$(cat $PASSWORDFILE | head -n 1)
LOGFILE=$OUT_DIR/analyze_log_$(date +%F)
SUMMARYFILE=$OUT_DIR/analyze_summary_$(date +%F)
# Write the opening banner. Note that the first line truncates
# any existing logfile with the same name.
# If you want this to operate silently,
# replace 'tee -a' with '>>' and 'tee' with '>'.
echo '********************************************' | tee $SUMMARYFILE $LOGFILE
echo '******* ArcSDE Batch Analyze Utility *******' | tee -a $SUMMARYFILE $LOGFILE
echo '********************************************' | tee -a $SUMMARYFILE $LOGFILE
echo 'Analyze operation initiated ' $(date) | tee -a $SUMMARYFILE $LOGFILE
echo "Logging output to $LOGFILE" | tee -a $SUMMARYFILE
# Here's the SQL command we send to SQL*PLUS
# to get a list of all the objects in sde.table_registry
{
echo "set pagesize 0";
echo "set feedback off";
echo "select owner || '.' || table_name from table_registry order by owner, table_name;";
} | sqlplus -s sde/$PASSWORD@$DBMS_INSTANCE > $LAYERLIST;
echo 'Processing' $(cat $LAYERLIST | wc -l ) 'layers...' | tee -a $SUMMARYFILE
echo 'Any errors encountered during this process will be listed below.' | tee -a $SUMMARYFILE
echo '----------------------------------------------------------------' | tee -a $SUMMARYFILE
# Iterate through the layer list and run sdetable -o update_dbms_stats.
numLayers=0
numProcessedOK=0
for i in $(cat $LAYERLIST);
do
    # Attempt to run Analyze on each layer
    sdetable -o update_dbms_stats -u sde -p $PASSWORD -t $i &> /tmp/dbstats.out;
    # Test for errors by grepping for the string "Error"
    errorsPresent=$(grep Error /tmp/dbstats.out | wc -l)
    let numLayers=$numLayers+1
    if [ $errorsPresent -gt 0 ];
    then
        # Log the ArcSDE error messages (anything that says "Error")
        grep Error /tmp/dbstats.out | tee -a $LOGFILE $SUMMARYFILE
    else
        # Log the last line of ArcSDE dbms_stats command output
        # as our indication of success
        tail -n 1 /tmp/dbstats.out >> $LOGFILE
        let numProcessedOK=$numProcessedOK+1
    fi
done
echo '----------------------------------------------------------------' | tee -a $SUMMARYFILE
echo 'Done.'
echo "$numProcessedOK layers out of $numLayers analyzed OK." | tee -a $SUMMARYFILE $LOGFILE
echo 'Analyze operation completed' $(date) | tee -a $SUMMARYFILE $LOGFILE
if [ "$SEND_EMAIL" == "true" ];
then
SUBJECT="$SERVER_NAME Analyze log for $(date +%F)"
EMAIL=$ADMIN_EMAIL
EMAILMESSAGE=$SUMMARYFILE
/bin/mail -s "$SUBJECT" "$EMAIL" < $EMAILMESSAGE
fi
___________________________________________________
Seth Peery
Senior GIS Architect, Enterprise GIS
Virginia Tech Geospatial Information Sciences
Virginia Polytechnic Institute and State University
Posted 04-15-2011 11:59 AM | 0 kudos
POST
I manage an ArcSDE installation at a large US research university in which feature class data for a single SDE instance is distributed across several dozen Oracle schema owners, enabling some measure of decentralized responsibility for data management within each of my academic and administrative departments. I would like to improve my workflow for the DBMS statistics updates ("Analyze" from ArcCatalog) that need to happen before and after I execute a compress on the database.

I'm wondering if anyone here has had any luck running sdetable -o update_dbms_stats in a batch operation for all feature classes in the table registry. It is prohibitively time-consuming to run this command on the hundreds of feature classes that exist across all the departments' ArcSDE accounts, but if I understand the command reference, this command expects a '-t' argument specifying an individual table. I could imagine wrapping this command in a shell or Python script that would grab a list of all the feature classes known in the repository across all schemas in the Oracle instance (via SQL querying table_registry? by parsing the output of one of the command-line utilities?), then iterate across that list and run 'analyze' for each.

Is there something I'm missing here? This seems to be an awfully involved procedure to accomplish something that the documentation suggests needs to happen weekly, before and after each compress. I cannot rely on each individual data-owning department to right-click each of their feature classes and select "Analyze" at a time that is convenient for the Enterprise GIS division, so I need some way to automate this and perform analyze operations database-wide, just as I do for compress. The complexity of my proposed solutions suggests that I am missing something obvious and simple, which I hope to be the case. If the price to pay for improving transaction efficiency on ArcSDE is to temporarily become the object of ridicule on a forum for overlooking the obvious, sign me up. Thanks in advance for your help!

___________________________________________________
Seth Peery
Senior GIS Architect, Enterprise GIS
Virginia Tech Geospatial Information Sciences
Virginia Polytechnic Institute and State University
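The shell-or-Python wrapper mused about above might start as simply as this: given a list of owner.table names (however obtained from sde.table_registry), emit one sdetable command line per table. The helper name and the password-environment-variable convention are my own assumptions, not anything from the ArcSDE toolset.

```python
# Sketch only: build 'sdetable -o update_dbms_stats' command lines from a
# list of owner.table names. '$SDE_PASSWORD' is written literally so the
# shell expands it at run time and the password never lands in the script.

def build_analyze_commands(tables, user="sde", password_env="SDE_PASSWORD"):
    """Return one analyze command line per registered table."""
    return [f"sdetable -o update_dbms_stats -u {user} "
            f"-p ${password_env} -t {t}" for t in tables]

# Stand-in for: select owner || '.' || table_name from table_registry;
tables = ["GISADMIN.PARCELS", "FACILITIES.BUILDINGS"]
cmds = build_analyze_commands(tables)
assert cmds[0].endswith("-t GISADMIN.PARCELS")
```

The resulting lines could be written to a file and run as a batch, which is essentially what the bash script posted later in this thread does directly in the shell.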
Posted 03-16-2011 02:06 PM | 0 kudos