POST
Bit confused along with George, but below is a solid starting point specific to SQL 2012. Find my posts there and others' responses. We initially had issues with performance until we pushed on Microsoft and discovered they had recommended changes to the configuration of the underlying drives for SQL Server; once we corrected that, our performance was stellar. https://community.esri.com/thread/106931
Posted 08-30-2019 04:58 AM

POST
Not sure if anyone else is the "reads ahead" sort. I was reviewing something that an ESRI employee on the database side whom I'm familiar with sent out regarding the excitement around branch versioning. It's true it opens up some great things, and I can see potential opportunities for using it. You can read about it all in these 2 articles: Intro to Branch Versioning and Setting the Stage.

In my typical fashion, the highlight reel isn't enough and I had to go under the hood. In there you will find some very interesting gotchas on the Oracle side. The inability to use compressed tables probably isn't ideal for some of your DBAs, but it's still acceptable. The inability to use Oracle's native geometry type is what will be cause for concern. If you are like us, you have a slew of other systems that aren't ESRI driven and that utilize the native SDO_GEOMETRY. It's not a show stopper; you could run everything in ST_GEOMETRY and convert over once you are ready to push standardized data formats across the agency. But if you are leveraging Roads & Highways and already scaled out in production using SDO_GEOMETRY, it might get interesting.

I don't know if R&H will eventually leverage branch versioning, and I don't know if ESRI intends to eventually have branch versioning work with SDO_GEOMETRY. I can say it's something worth keeping on the radar, and it could sway decisions you make for R&H moving forward, particularly once you decide to undertake the move to Pro, which seems like the ideal time to stage yourself as best as possible for these types of long-term capabilities. Here is the article you want to review if you are getting under the hood: Register Data as Versioned
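If you want to see where you stand before weighing a move off SDO_GEOMETRY, the Oracle data dictionary can tell you which tables use which geometry type. A minimal sketch in Python; the connection details are placeholders, and it assumes the cx_Oracle package and Oracle client libraries are installed:

```python
# Sketch: inventory Oracle geometry column types before planning a
# branch-versioning / ST_GEOMETRY migration. Connection details are
# placeholders; run_inventory assumes cx_Oracle is available.

GEOMETRY_TYPES = ("SDO_GEOMETRY", "ST_GEOMETRY")

def build_inventory_query(types=GEOMETRY_TYPES):
    """Build a data-dictionary query listing every visible table
    column whose type is one of the given geometry types."""
    in_list = ", ".join("'%s'" % t for t in types)
    return (
        "SELECT owner, table_name, column_name, data_type "
        "FROM all_tab_columns "
        "WHERE data_type IN (%s) "
        "ORDER BY owner, table_name" % in_list
    )

def run_inventory(dsn, user, password):
    import cx_Oracle  # assumed installed alongside Oracle client libs
    conn = cx_Oracle.connect(user, password, dsn)
    try:
        cur = conn.cursor()
        cur.execute(build_inventory_query())
        return cur.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    print(build_inventory_query())
```

Nothing fancy, but running it against each instance gives a quick head count of how entangled you are with the native type.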
Posted 08-21-2019 11:42 AM

POST
Ah, now I see the conundrum. I agree with Kyle Gonterwitz with regards to SQL Server, because managing multiple independent databases within a single SQL Server instance is much more straightforward and easily allows for a slew of ESRI versions, since each database contains its own SDE. Oracle is managed much differently in that regard: Oracle DBAs still seem to heavily favor limited instances per server, using multiple schemas within an instance the way SQL DBAs would use separate databases on a server. The catch always ends up being the single-SDE-schema limitation, which bottlenecks many organizations' flexibility in using Oracle.

Relative to the situation you are asking about: yes, we are using independent Oracle environments. We have Development, Vendor Development, Test and Production environments for our Roads & Highways needs spread across 4 different instances. All of those instances are shared instances with multiple schemas in them, except for Production, which is its own true independent instance. The catch is we do zero ESRI work in our Oracle databases other than R&H, so the shared instances are only shared with other projects' systems for things like finance or projects data. This frees our R&H implementations from any dependencies on other needs for ESRI versions.

For your original question: we have seen solid performance out of our isolated Production instance at 12c, despite the fact that the instance still shares a common Linux host with other instances. I can't say we've seen any issues thus far, other than that within our latest 10.7 efforts the ESRI permissions diagram for various users seems to be missing the fact that the role for our user accounts needs "Delete" on the LRS_Edit_Log table; we've seen errors during rec/post when that's not in place.
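That missing grant is easy to script once you know about it. A hedged sketch: the role name `rh_editor_role` is a placeholder for whatever role your R&H editing accounts use, the table is assumed to live in the SDE-owning schema, and applying it requires cx_Oracle and a DBA-level connection:

```python
# Sketch: apply the DELETE grant on LRS_EDIT_LOG that rec/post needs.
# "rh_editor_role" is a hypothetical role name; adjust owner/role to
# match your geodatabase. apply_grant assumes cx_Oracle is installed.

def grant_statement(owner="SDE", role="rh_editor_role"):
    """Build the GRANT our 10.7 rec/post errors pointed us toward."""
    return "GRANT DELETE ON %s.LRS_EDIT_LOG TO %s" % (owner, role)

def apply_grant(dsn, admin_user, admin_password):
    import cx_Oracle  # assumed available
    conn = cx_Oracle.connect(admin_user, admin_password, dsn)
    try:
        conn.cursor().execute(grant_statement())
        conn.commit()
    finally:
        conn.close()
```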
Posted 08-20-2019 04:25 AM

POST
Got it, ok. Sorry, can't speak to that. We are running Oracle 12c on Linux servers, but all our instances are on machines shared with other instances. We've seen performance issues at points, but more so in 11g. I'd expect a dedicated machine is the dream configuration: being isolated makes managing resources and troubleshooting much easier, since there's no parsing out which instance was doing what and when. We'd prefer to move to an isolated machine, but it's not cost effective with our current Oracle environments.
Posted 08-16-2019 10:37 AM

POST
Are you asking about technical performance issues of the database itself, or about the software's use of the database? Also, depending on your enterprise GDB, the terms database, server and instance can be interpreted in different ways. When you say instance, I think of an Oracle instance: a singular Oracle environment containing multiple schemas that may or may not live on physical server resources shared with other Oracle instances. So are you asking about a single instance on a shared machine, or do you mean a dedicated machine?
Posted 08-16-2019 10:19 AM

POST
Angus, I agree that hosted configs are by far the hardest, but federated servers also don't make it easy. You are correct that you would need an outage period; we perform those sorts of activities during off hours for those reasons. I was looking at the possible options Rex Robichaux originally listed and considering how they would play into what he was trying to do. Since he doesn't currently have Portal, it's just an ArcServer migration and neither hosting nor federation is at play, so how did those options fit his needs? You are correct that in those scenarios you are performing the same sort of live/transactional data steps that DBAs perform, and as a result you must account for that.

Exactly, I've been behind load balancers since the 10.0 days, but I think they would have eased some of my early frustrations with ArcServer in the 9.x days. I'd highly suggest getting to that point if you are able, but agree you will likely have your work cut out trying to re-route things. Currently we are behind the F5 BigIP load balancer and, as I referenced before, it makes life night-and-day easier. It's why we tend to rebuild anew each time: we can leave current systems running for usage while we fully vet all aspects of the new environment, then flip the switch when ready. As you mentioned, this isn't as straightforward once hosting is involved. We've avoided hosting for several reasons, mostly tied to authoritative data and data governance concerns, but also because for the few activities we'd address with that sort of solution, we leverage our ArcGIS Online environment. For Portal, we embed an entire portal server as part of a single environment silo: i.e., projects A and C are independent, unfederated vertical stacks of ArcServers and web adaptors, while project B has Portal, federated ArcServers and a web adaptor, but nothing from A or C will ever touch project B's Portal.

We have such an array of needs that trying to route all our ArcServer environments through a single organization-wide Portal would be crippling, because you'd be pushing everything to upgrade together. Our decisions here also tie into the conundrum that Portal uses a singular Web Context URL, which means you must embed all your projects behind it and rely on web adaptor naming to give end users a simple, logical, easy-to-remember URL for the various things they need. Another concern for us is that organization-wide usage of Portal introduces a single point of failure; localizing Portal to each project's needs ensures only system 1 goes down and not all of them.

Hosting aside, I have been considering some upcoming testing of federation to see how far we can stretch things by using iRules and URIs against Portal. The idea is that we establish an outward-facing URL on the F5 load balancer and then use iRules to establish 2 separate Portals with independently set Web Context URLs that we can hot swap behind the load balancer. In theory this is feasible, and we've done it with other systems, just never with Portal; it becomes a question of how the applications (Portal/ArcServer) handle the traffic loop across the URLs. Again, if we can get that done, then in theory you could migrate machines with a minimal flip of the switch through redirects for all non-hosted environments.
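Whichever way you flip the switch, it helps to script a sanity check of the new silo before redirecting traffic to it. A minimal sketch using the standard ArcGIS Server REST info endpoint; the host and web adaptor path below are placeholders for your own:

```python
# Sketch: hit the ArcGIS Server REST info endpoint on the new
# environment before swapping it in behind the load balancer.
# The base URL below is a placeholder; substitute your own.
import json
import urllib.request

def server_is_up(base_url, timeout=10):
    """Return True if the site answers its rest/info endpoint with a
    currentVersion field -- a cheap 'is it alive' check before cutover."""
    url = base_url.rstrip("/") + "/rest/info?f=json"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            info = json.loads(resp.read().decode("utf-8"))
        return "currentVersion" in info
    except Exception:
        return False

if __name__ == "__main__":
    # Placeholder URL for the not-yet-live environment.
    print(server_is_up("https://abcnew.myorgdomain/arcgis"))
```

This only proves the site is answering; you'd still spot-check services, GPs and queries by hand as described, but it's a nice first gate in an automated cutover.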
Posted 08-16-2019 05:00 AM

POST
I think Join Site can work for your scenario, and I wouldn't foresee it causing you issues. Again, for trust reasons, I'd suggest that after joining the site and before pulling your original server out of it, you go directly against the new server to check that all services are in place, GPs are working, you can run queries, etc. After confirming your new server is performing as expected, drop the old one out of the site. I believe you would need a second license for the new server, because you will need the license during installation, where it creates the new keycodes file (C:\Program Files\ESRI\License10.7\sysgen) local to the new machine. While the licenses are CPU driven, I can say I've never tried it, but I don't believe a single license that supports 4 cores could be used on your old server against 2 cores and then also on your new server against 2 cores. That's something Jonathan Quinn could maybe answer, but I know many ESRI staff will wave the white flag when it comes to licensing issues and point you toward your local rep.
Posted 08-16-2019 03:50 AM

POST
Rex, I think you will find there are several ways to accomplish this, and it becomes more about preference, level of trust and speed. We manage around 70 servers covering 6 major projects and supplemental pieces like ArcGIS Monitor. Coming from a government perspective where rigid documentation for auditors is required, our level of trust is moderate and our speed can slide to the right a bit to ensure extra thoroughness. As a result we never upgrade in place for OS or ESRI version, but for various reasons we upgrade about every 18-24 months, keeping all servers within 1 release of each other (i.e. we went from 10.3/10.3.1 to 10.5/10.5.1 and are finalizing our 10.7/10.7.1 migrations now). What's good about our lift-and-shift, build-it-over-from-scratch-each-time method is that the isolation and constant re-evaluation of all aspects of the environments (database, servers, network, etc.) keeps us continuously evaluating how we do things and looking for efficiencies or better (new) best practices. What's bad is that it initially has a learning curve and adds time to projects, because the repetition of routine things like setting static routes or firewall rules isn't there. Once it is, it still likely takes a little longer than some other options, but not much. As for your questions:

1) Join Site - I can't say I've ever tried to use the Join Site option to leverage a network shared store for hot swapping servers at different OS levels. We've had servers go down with issues and spun up brand new ones to hot swap into an ArcServer site, but the machines always had the same OS.

2) WebGIS DR - This is valid but, as discussed, a more complicated process, though I don't think it's one you'd be up against. I believe the real sticking point behind a lot of this is the need to maintain a consistent Web Context URL for your end users between the old and new environments. This is specific to Portal configurations, and without Portal in your existing environment it is likely overkill. Depending on your networking expertise or available network staff, routing traffic from your old environment to the new one should be about as easy as flipping a light switch. If you have an F5 BigIP or something similar this is a breeze; if not, it can be done via DNS entries. Either way, you can establish your entire new environment behind anything you want and then simply redirect traffic in 15 minutes during off hours. If your current environment is behind https://abc.myorgdomain/ then you simply build everything under https://abcnew.myorgdomain/, perform all your migration and testing of the new environment behind that "new" route, and when it's time to switch, the "new" route gets deleted and https://abc.myorgdomain/ gets redirected to send traffic to your freshly migrated environment.

3) Fresh install - This ties into what I said we do, and into the bottom of the #2 response. New server names won't matter; it's really just about managing your traffic. By content and directories, I'm not sure if you are referring to ESRI software components (i.e. config-store and directories) or your own application and other file needs. As a best practice for us, all our VMs have a designated storage drive (we use D:\ internally, or E:\ on Azure IaaS boxes). That drive is where we store all our apps and our Oracle and SQL client installs, direct ArcServer logs to, and redirect our IIS websites to. It ensures separation of system files (C:\) from our project-specific needs/installs (storage drive) across any server, making it easier to troubleshoot and to cross-train people for support. For your own files, you are sort of on your own to identify what goes or gets dropped, but depending on how your vendor built that app, you would want to determine (or reach back out and ask) whether you have any critical file structures or hard-coded entries which must be preserved; that goes not just for files but for URLs. As for the ESRI pieces, there are likely options for migrating the ArcServer directories and config-store folders, since your initial move stays at the same 10.4.1 version. I've never had a need to do this mirrored lift and shift before, so I can't point you to a solution there, but I know there'd be steps for sure, due to entries in the config-store. This is where we bite the bullet on time, building from scratch. Anything we publish gets stored on a network drive in its original MXD, Toolbox, etc., and from that we create and also store the .sd drafts for each. This allows for rapid re-publishing if we decide to delete a service for any reason, and when we do these migrations we can quickly and easily update things to use new databases (we generally push SQL version upgrades during migrations) with new SDE versions, and publish everything from scratch into the new environment. It eliminates the need to carry over the ArcServer files and also helps identify any issues with configurations, networking, publishing, etc.

4) I hadn't seen that forum post before, or it was a year ago and I've just forgotten it. This really speaks to an area of dissent and opinion around Portal, in particular federation. I won't go into my various gripes (they are long), but I'll leave it at this: in environments our size, it's only going to be used where and when it absolutely must be, and it will stay contained within that particular project/environment's space. We have several project environments doing exactly what you are doing with a custom app, and I'd say it's completely acceptable to do using an un-federated environment.

I do like the scripted idea Andrew laid out and may look more into it, although we often make changes to some parts of this, so it goes back to our slow and methodical steps; documentation allows for consistency and full review.
Posted 08-15-2019 08:52 AM

POST
We do have fGDBs out on the shares that people do edits and GPs against, but it's all locked down super granularly within NTFS. We use a series of AD groups, ranging from completely hiding things away from all but a select few to leaving them wide open for everyone in the organization to view. There are so many different workflows and options that may dictate what we do with one fGDB versus the next. But yes, in general, 90% or more of the activities we do are in enterprise GDBs.
Posted 07-26-2019 11:16 AM

POST
I'd agree with Lance. I have done similar workflows in the same manner. No lock files to date inside fGDBs, just enterprise GDBs.
Posted 07-26-2019 10:35 AM

POST
I can say that I haven't worked with shapefiles or personal geodatabases (pGDBs) in a long time, primarily because I only use shapefiles when a system dictates it's the only (or sometimes easiest) way to get data to that system. As for pGDBs, I'm sure there are maybe 3 workflows under which ESRI recommends them as the best practice. We strictly use file geodatabases (fGDBs), and I will do that even for something as simple as 1 data set. Storage is so cheap these days that any slight amount more it takes up is irrelevant, and fGDBs have been extremely reliable, with the ability to leverage a large share of the geoprocessing (GP) tools we'd invoke against an enterprise geodatabase (eGDB). For a variety of reasons we have a slew of fGDBs living out on network shares that we use.

I'd say this really becomes a workflow question. Our fGDBs get used for odds-and-ends work and a lot of verification. They aren't used as production sources, nor are they utilized for routine or scripted processes. It sounds like this may be production-level, transactional-type work you are doing. If that's the case and you don't have an eGDB, then I'd agree that SQL Express is the next best option, and until SQL Express is available to you, an fGDB is the best option. ESRI is correct: fGDBs are not intended for multi-user editing workflows. While you can have multiple users inside an fGDB at once, you run some risk of creating issues once those users start touching the same data sets. I can't say I have ever heard that simply storing the fGDB on a share drive carries corruption risk. The only theory I can offer is a networking one: if you are mid-edit or a GP is running and there's a significant network lag/spike or a temporary drop, I could see how that could cause orphaned processes that create issues or corruption. That's nothing that can't be accounted for and addressed by something simple like a robocopy backup of your fGDB each night; if you don't have access to things like Windows Task Scheduler, it can be done via Python or ModelBuilder options against an ArcServer. You are just taking on some of the routine maintenance tasks database administrators (DBAs) take on.
Posted 07-26-2019 07:43 AM

POST
Naw, no pipedream; it's happening and coming. Back in 2011 we worked with a defense contractor called SAIC and were already doing automated feature extraction then, busy trying to resolve issues like dividing individual dwellings out of clusters of mud huts and lean-tos. A recent ESRI article actually hits on a road network example: https://www.esri.com/about/newsroom/arcwatch/where-deep-learning-meets-gis/?adumkts=branding&aduc=email&adum=list&aduSF=newsletter&utm_Source=email&aduca=arcwatch&adut=301117_ArcWatch_June_2019 The SharedStreets ID is interesting and almost mirrors concepts being used to address long-standing headaches with identity management in enterprise systems.
Posted 06-24-2019 10:39 AM

POST
I know TensorFlow has been used for some imagery classification work but unsure if it's the best for road network extraction. Have seen examples of road network extraction before but wonder if any of the API's would allow for such a workflow to run because often those basemap services don't allow much in the way of advanced processes. Having to do some sort of image capture and then ortho the images yourself first would be tedious and probably more cumbersome than other manual options. Sorry got the gears going now and already thinking out the options and paths to try and undertake this.
Posted 06-24-2019 09:42 AM

POST
I like this direction, but my immediate frustration goes back to some of the update workflows. I almost foresee some fun ML/AI workflow run routinely to calculate differences, where you look for change (I'll date myself with imagery analysis here: red = fled and blue = new) between the LRS and the various sources, then submit updates for both new and retired data. The inverse could be done at various upper echelons, to help, say, a State identify new/retired county, township, etc. routes based on what's found in OpenStreetMap and the like but not yet pushed up or digitized into the state's data.
Posted 06-24-2019 09:28 AM

POST
Unfortunately, the only interaction they have told me they allow to date is via registered users within their Google Maps application. Sadly, it was easier to get licensing approvals for use of Google Maps on a television broadcast; this is why I mentioned before that I loathe it. Essentially: log in to Google Maps > click the menu on the left side > toward the bottom click Send Feedback > select the correct option for "Missing Place", "Missing Road" or "Wrong Information" and follow the workflow. It's a single-edit workflow, so you will rinse and repeat a troubling number of times for each edit you want to make. Additionally, it's a point-driven workflow, so if you need to adjust or add a route, you identify a single point by clicking on the map and then have to spell out the linear correction in the comments of the edit. That's why, when partial errors happen, you need to re-identify the location and redo the comments to detail what they got wrong from your initial submission; to date, though, I've never had to make more than 2 submissions. As a final point, the only upside I have found is that the workflow forces you to see their maps/imagery, and in cases where you are adding a missing route, if it's not in their imagery it becomes a challenge; I've found it's easier to wait until the imagery shows the new road before submitting the correction. I've spoken with several people at Google over at least the last 8 years, and they all say there's no bulk update option (i.e. send them a KMZ or something OGC compliant, etc.); this is the only current prescribed workflow for getting this done. Rough, but necessary in some cases.
Posted 06-19-2019 09:29 AM
Online Status: Offline
Date Last Visited: 03-07-2022 02:41 PM