POST
"I thought I would run it once before the table is closed (= the time when, by experience, the size of the geodatabase on the disk grows)"

Compact is an operation on the Geodatabase object, and it does not matter whether the table is open or closed. As I explained before, the table does not increase in size when you close it. The file size increases because of inserts or updates. But there can be a delay in updating the file system metadata to reflect the newly increased size. When the file is closed, that process is accelerated.
02-03-2014 02:51 PM
POST
Compact does not require that the Table or Geodatabase be closed first. Compact can take a significant amount of time to run, depending on how much fragmentation is present and how big the file is. Because of that, you might want to minimize how often you use it. It is similar in some respects to running a disk defragmenter: since it takes a long time, you would not want to run it several times a day.
01-30-2014 09:01 AM
POST
Thank you for the repro case. What you are seeing is not a bug; it is the expected behavior when you perform a large number of updates to your data. The same thing happens in ArcGIS.

The reason the files become very large when you do a great deal of updates is that the update process often creates internal fragmentation in the files. Usually this happens when an updated record requires more storage than it did before. When this occurs, the record must be written out to a new location in the file, and the previous location is marked as deleted or free space within the file. All of these deleted records are tracked in the "freelist" file.

There are a couple of ways that the free space can be recovered. You discovered one of them: deleting a field. When a field is deleted, the entire file is rewritten one record at a time, and the deleted records are skipped. The better way to recover free space is to run the Compact Database command in ArcGIS, either from the database property page or from a GP tool.

In the FileGDB API, we did not expose CompactDatabase functions in the original design. However, we are in the process of creating an updated release with many enhancements and bug fixes, and we will add CompactDatabase functions as part of it. Our plan is to have the new release ready around the time of the Developer Summit in March.
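The fragmentation-then-compact cycle described above is easy to see in any record-oriented file. A minimal sketch using Python's sqlite3 purely as an analogy (SQLite's VACUUM plays the role Compact plays for a file geodatabase; the FileGDB file format itself is different):

```python
import os
import sqlite3
import tempfile

# Create a database, fill it, then delete most rows to leave free pages.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany(
    "INSERT INTO t (payload) VALUES (?)",
    [("x" * 500,) for _ in range(5000)],
)
con.commit()
con.execute("DELETE FROM t WHERE id % 10 != 0")  # delete 90% of the rows
con.commit()
size_before = os.path.getsize(path)  # file keeps the dead space

# VACUUM rewrites the file without the dead space -- analogous to
# running Compact Database on a file geodatabase.
con.execute("VACUUM")
size_after = os.path.getsize(path)
con.close()

print(size_before, size_after)
assert size_after < size_before
```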
01-23-2014 09:32 AM
POST
I think what you are seeing is how the file system works on Windows. If you open a file and write data into it, the file is obviously larger than it was before, and one would naturally assume that checking the size of the file would show that the size had changed. The confusing thing is that Windows does not guarantee that the file system metadata will be flushed until the file is closed, and even then, there might be a delay until the new size is visible.
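A related effect is visible at the application level with buffered writes: data you have written is not reflected in the reported file size until the buffers are flushed. A small Python sketch (this demonstrates userspace buffering, not the Windows metadata delay itself, but the symptom is the same):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "grow.bin")

f = open(path, "wb")                    # buffered by default
f.write(b"\0" * 1000)                   # small write stays in the userspace buffer
size_unflushed = os.path.getsize(path)  # likely still 0 at this point

f.flush()                               # push the buffer to the OS
os.fsync(f.fileno())                    # ask the OS to persist data and metadata
size_flushed = os.path.getsize(path)
f.close()

print(size_unflushed, size_flushed)
assert size_flushed == 1000
```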
01-22-2014 12:59 PM
POST
There is no getting around the requirement of knowing the name of a table before opening it. But, there is a way to find out the names of all the tables in the database. You need to call the GetChildDatasets function on the Geodatabase class. It returns a vector of strings containing the table names.
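The enumerate-then-open pattern looks roughly like this; sketched with Python's sqlite3 rather than the FileGDB API itself (the sqlite_master query stands in for GetChildDatasets, which returns the dataset names as strings):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE roads (id INTEGER)")
con.execute("CREATE TABLE parcels (id INTEGER)")

# Step 1: discover the table names without knowing them in advance
# (GetChildDatasets plays this role in the FileGDB API).
names = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
print(names)  # ['parcels', 'roads']

# Step 2: open/query each table by the name just discovered.
for name in names:
    count = con.execute(f"SELECT COUNT(*) FROM {name}").fetchone()[0]
    print(name, count)
```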
12-30-2013 10:05 AM
POST
There was some discussion of this last year. Some preliminary work was done to investigate the feasibility of that, and it looked very encouraging on the iOS side. All indications were that it would "just work". Android, with its many deficiencies and inconsistencies, was another thing entirely. The idea was put aside for the time being, and there has been nothing further since then.
09-18-2013 09:27 AM
POST
I believe you will find that SELECT DISTINCT has worked all along. There are no immediate plans to implement SELECT TOP. I am not convinced that it is particularly useful. If you want the equivalent of SELECT TOP 100 * from xxx, why not just keep track of how many rows you have read in and then stop after you hit 100?
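The suggested workaround is just read-and-count; a minimal sketch with Python's sqlite3 standing in for a FileGDB query cursor:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE xxx (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO xxx (id) VALUES (?)", [(i,) for i in range(1000)])

# Equivalent of SELECT TOP 100 * FROM xxx: read the cursor row by row
# and stop once 100 rows have been consumed.
top = []
for row in con.execute("SELECT * FROM xxx"):
    top.append(row)
    if len(top) >= 100:
        break

print(len(top))  # 100
assert len(top) == 100
```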
09-11-2013 07:46 AM
POST
"You're screwed. FGDBs support an extremely limited set of subqueries that are under the restrictions I mentioned, and cannot do what you can do with personal geodatabases and SDE databases, which support full subquery functionality. FGDB subqueries can only return one value from an entire table, and the table providing the subquery value (MAX, SUM, MEAN, etc.) must be external to the table where you are making the selection based on the subquery value. Nothing can make the approach your code is attempting work with an FGDB."

This is not correct. A scalar sub-query in FileGDB can reference the same table. I do this all the time, and it was written specifically for this purpose. For example, if my table is ROADS, I can execute the following sub-query in a WHERE clause:

OID = (SELECT MAX(OID) FROM ROADS)

Could you provide some specific examples of the things you have tried that do not work as you would like?
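That WHERE clause can be demonstrated with any SQL engine; a quick check using Python's sqlite3 standing in for FileGDB, with a hypothetical ROADS table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ROADS (OID INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO ROADS (OID, name) VALUES (?, ?)",
                [(1, "Main"), (2, "Oak"), (3, "Elm")])

# Scalar sub-query that references the same table it filters:
# selects the row with the highest OID.
row = con.execute(
    "SELECT OID, name FROM ROADS WHERE OID = (SELECT MAX(OID) FROM ROADS)"
).fetchone()
print(row)  # (3, 'Elm')
assert row == (3, "Elm")
```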
08-08-2013 11:03 AM
POST
The most likely explanation is that the variable _colName has not been set or has a typo.
07-29-2013 02:03 PM
POST
When you want to insert or update a shape column using the FileGDB API, you must use one of the ShapeBuffer helper classes when you call the Row::SetGeometry function; you should not use the ByteArray class. We ship sample programs which illustrate the use of several of the ShapeBuffer-derived classes. There is also extensive documentation of the shape buffer format included with the API, and it is critically important that you understand that format. It is essentially the same format as the shape buffers used in shapefiles, but with some Geodatabase extensions (e.g., curves, multipatches, etc.). The API does not provide a way to directly use or convert other shape buffer formats to the one the API requires.
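For the simplest case, a 2-D point, the layout is a little-endian int32 shape type followed by the X and Y coordinates as little-endian doubles, as in the published shapefile record format. A byte-level sketch in Python (illustrative only; in the API itself you would use a ShapeBuffer helper class rather than packing bytes by hand):

```python
import struct

# A point shape buffer, following the shapefile record layout that the
# FileGDB shape buffer format extends: little-endian int32 shape type
# (1 = Point) followed by X and Y as little-endian doubles.
x, y = -117.19, 34.06
buf = struct.pack("<idd", 1, x, y)
assert len(buf) == 20  # 4 bytes type + 2 * 8-byte doubles

# Decode it back to confirm the layout round-trips.
shape_type, px, py = struct.unpack("<idd", buf)
print(shape_type, px, py)  # 1 -117.19 34.06
assert shape_type == 1 and px == x and py == y
```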
07-19-2013 09:20 AM
POST
You need to put Esri.FileGDBAPI.dll and FileGDBAPI.dll in the same folder as your executable. If you examine all of the sample programs that we supply, you will see that there is a post-build step in each project which performs this copy operation. A good starting point for your own applications is to copy and modify one of the sample projects so that you don't have to figure out how to do this yourself.
07-19-2013 09:08 AM
POST
For your XP users, why not stick with the VS2008 or VS2010 version of the API? Is it really critically important to use VS2012?
07-12-2013 01:47 PM
POST
Using Windows Explorer to copy your data while it is in use by another process is very risky. There is a very good chance that you will end up corrupting your data in some way. The problem is that any files that are open might not be copied as expected and you will end up with files missing in the new folder. It used to be the case that Windows Explorer would give you an error message that there was a sharing violation and the copy would stop. But that behavior was changed in Vista and Windows 7. Now, it quietly copies the files it can, and skips the files that cannot be copied. I hope you have good backups! It is also extremely risky to go into the database folder and delete files that are there. Please make backups first!
06-26-2013 09:35 AM
POST
I don't think the existence of .FREELIST files has anything to do with your problem. The way to get rid of .FREELIST files, however, is to run the Database Compact command. It would be a bad idea to simply delete them, as you would then never be able to recover the unused space in your data files. If you are getting an error message that another process is accessing your dataset, that message can be considered reliable. Why are you so sure that no other process is active? The way to know for sure is to go into the database folder and look at any .LOCK files that are present. In the lock file names, you can see the name of the locked dataset, the type of lock, and the process and thread ID which owns the lock. You will probably find that some other process does in fact have the dataset open.
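Listing the lock files from a script is straightforward; a Python sketch (the lock file name below is invented for illustration; real lock file names encode the dataset, lock type, and process/thread IDs):

```python
import glob
import os
import tempfile

# Simulate a geodatabase folder containing a lock file. The name here
# is made up; only the listing technique matters.
gdb = tempfile.mkdtemp(suffix=".gdb")
open(os.path.join(gdb, "a00000001.gdbtable"), "w").close()
open(os.path.join(gdb, "_gdb.GDB_SystemCatalog.sr.lock"), "w").close()

# List every lock file present in the database folder.
locks = sorted(os.path.basename(p)
               for p in glob.glob(os.path.join(gdb, "*.lock")))
print(locks)
assert locks == ["_gdb.GDB_SystemCatalog.sr.lock"]
```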
06-25-2013 08:59 AM
POST
It would mean using a separate thread for each table, and that is under your control. I am not sure this would work, but it is much more likely to work this way than if the same table were accessed by more than one thread. This also assumes that whatever processing is happening is not making any use of LibXML in more than one thread at a time. The main use of LibXML is when the API is doing anything related to schema, e.g., creating a new table, describing an existing table, adding or removing a column, etc. All of those operations involve the use of XML. But XML usage is minimized if all you are doing is opening a table and getting its rows.
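The one-thread-per-table pattern can be sketched like this, with Python's sqlite3 and threading standing in for the FileGDB API (each thread owns its own table and its own connection, and nothing is shared between threads):

```python
import os
import sqlite3
import tempfile
import threading

base = tempfile.mkdtemp()
results = {}  # distinct key per thread, so concurrent writes don't collide

def worker(table):
    # Each thread opens its own database/table object; no table object
    # is ever touched by more than one thread.
    path = os.path.join(base, f"{table}.db")
    con = sqlite3.connect(path)
    con.execute(f"CREATE TABLE {table} (id INTEGER)")
    con.executemany(f"INSERT INTO {table} (id) VALUES (?)",
                    [(i,) for i in range(100)])
    con.commit()
    results[table] = con.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    con.close()

threads = [threading.Thread(target=worker, args=(name,))
           for name in ("roads", "parcels")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)
assert results == {"roads": 100, "parcels": 100}
```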
06-19-2013 12:37 PM