Channel: Forums - Geodatabase & ArcSDE

Performance Issue (I/O bound) when processing geometric network

I have two machines on a local area network: one with an enterprise geodatabase installed (running the ArcSDE service), and the other acting as a client. The client machine connects to the enterprise geodatabase, retrieves feature classes, and processes the geometric network.

The application basically goes through each edge of the geometric network, retrieves its start and end junctions, does a little processing, and then inserts the information into a local Oracle database on the client machine.

From what I observe, the client machine constantly runs under 10% CPU usage, so it seems to be an I/O-bound issue. I tried multi-threading, creating several (or many) connections to the SDE server and retrieving data in parallel, but that doesn't do much better: CPU usage stays under 15%.

Here is a code snippet for processing the data (in VB.NET):

' Workspace and network interfaces (assumed to be initialized elsewhere)
Private featureWorkspace As IFeatureWorkspace
Private netColl As INetworkCollection
Private geometricNetwork As IGeometricNetwork
Private netElements As INetElements
Private netTopology As INetTopology

' Open the feature dataset and query the two network interfaces
' from the geometric network's logical network
netColl = CType(featureWorkspace.OpenFeatureDataset("xxx.DISTRIBUTION"), INetworkCollection)
geometricNetwork = netColl.GeometricNetworkByName("xxx.DISTRIBUTION_NET")
netElements = CType(geometricNetwork.Network, INetElements)
netTopology = CType(geometricNetwork.Network, INetTopology)

'Get the edge elements (clsid and objid identify the edge feature class; assumed to be set)
Dim elements As IEnumNetEID = netElements.GetEIDs(clsid, objid, esriElementType.esriETEdge)
elements.Reset()

For i As Integer = 1 To elements.Count
    Dim elementClassID, elementOid, elementSubid, fromJcnEid, toJcnEid As Integer
    Dim edgeEid As Integer = elements.Next()

    ' Map the network element ID back to its feature class / object ID
    netElements.QueryIDs(edgeEid, esriElementType.esriETEdge, elementClassID, elementOid, elementSubid)
    ' Get the from- and to-junction element IDs for this edge
    netTopology.GetFromToJunctionEIDs(edgeEid, fromJcnEid, toJcnEid)

    'individual segment and its to/from node geometries
    Dim segmentGeometry As ICurve = CType(geometricNetwork.GeometryForEdgeEID(edgeEid), ICurve)
    Dim fromPoint As ESRI.ArcGIS.Geometry.IPoint = CType(geometricNetwork.GeometryForJunctionEID(fromJcnEid), ESRI.ArcGIS.Geometry.IPoint)
    Dim toPoint As ESRI.ArcGIS.Geometry.IPoint = CType(geometricNetwork.GeometryForJunctionEID(toJcnEid), ESRI.ArcGIS.Geometry.IPoint)
Next

Does anyone know how I can overcome the I/O-bound issue? Is there anything I can configure on the ArcSDE server to allow faster multiple connections? Or is there any improvement to the code above that could boost performance?

Thanks!

SQL View Performance Issue

Hi, I have two databases set up on a test and a production server. Both are running SQL Server 2008 SP2. Both of the 10.1 SDE databases have been configured identically to one another and contain two tables (A and B) and a related view that selects all the data from A or B. The view definition is switched depending on which table was updated last. The tables have a Geometry field and have been registered with SDE.

On the test database I am able to add the view to the map, and performance is great when identifying points. On the production server the performance is terrible when identifying points. Comparing the query traces shows that the production DB is performing a clustered index scan while the test DB is performing a much more efficient clustered index seek. I am not able to identify why they differ. I also notice that the test DB is executing select statements on the lineage_name, state_id, and lineage_id tables and the production DB is not. Any ideas?

Update: I just looked at the SDE_States and SDE_State_Lineages tables to compare test to prod. The test DB has these tables populated with many records, while the production DB only has one record in each.
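One quick way to compare the two state tables is to query them directly from Python. A minimal sketch, assuming .sde connection files exist for both servers (the paths, and possibly the qualified table names, are hypothetical and will differ per setup):

Code:

# Compare SDE_States / SDE_State_Lineages row counts between test and prod.
# Sketch only: connection file paths are hypothetical.
import arcpy

for label, conn_file in [("test", r"C:\connections\test.sde"),
                         ("prod", r"C:\connections\prod.sde")]:
    sde = arcpy.ArcSDESQLExecute(conn_file)
    states = sde.execute("SELECT COUNT(*) FROM SDE_STATES")
    lineages = sde.execute("SELECT COUNT(*) FROM SDE_STATE_LINEAGES")
    print("{0}: {1} states, {2} lineage rows".format(label, states, lineages))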

Shapefile

Hi

I have added a CSV file to ArcMap with all the relevant data I need, then joined it to a shapefile for geographical reference. I now need to export it as a shapefile, as I'm going to be using it in GeoDa. I can do this, but when I open it up in GeoDa (and also in ArcMap) the column titles have been renamed to 1, 2, 3, 4, etc. instead of keeping the original names.

Does anyone know how to get the original column titles back and keep them within a shapefile?
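In case it helps to see the workflow concretely: one way to control the output field names on export is a field mapping. A sketch, assuming the CSV join is in place on a layer (the layer name and paths are hypothetical, and shapefile field names are capped at 10 characters):

Code:

# Export a joined layer to shapefile while controlling output field names.
# Sketch only: layer name and paths are hypothetical.
import arcpy

layer = "points_with_join"  # layer with the CSV join in place

fms = arcpy.FieldMappings()
fms.addTable(layer)

# Rename each output field to the part after the join prefix,
# truncated to the 10-character shapefile limit.
for i in range(fms.fieldCount):
    fmap = fms.getFieldMap(i)
    fld = fmap.outputField
    fld.name = fld.name.split(".")[-1][:10]
    fmap.outputField = fld
    fms.replaceFieldMap(i, fmap)

arcpy.FeatureClassToFeatureClass_conversion(layer, r"C:\data", "export.shp",
                                            field_mapping=fms)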

Thanks

Permissions to create or upgrade a geodatabase in Oracle

http://help.arcgis.com/en/arcgisdesk...0000002v000000

According to the documentation on privileges

ALTER ANY INDEX
CREATE ANY INDEX
CREATE ANY TRIGGER
CREATE ANY VIEW
DROP ANY INDEX
DROP ANY VIEW
SELECT ANY TABLE

These privileges are needed for the creation and installation of an ArcSDE geodatabase.

For those *ANY* privileges, is it possible to limit access, i.e., remove the ANY keyword so that access is restricted to the user's own schema only?

Restore an ArcSDE database backup to a database with a different name?

I am using ArcSDE 10.1 and PostgreSQL 9.1/PostGIS 2.0. I have backups of a production database that I'd like to use for a development database. The development database has "_dev" appended to its name to avoid confusion. However, when I restore the production backup to the development database, I receive the following error on connection:

Code:

2013-03-05 08:07:28 PST ERROR:  cross-database references are not implemented: "cvag.sde.sde_object_ids" at character 21
2013-03-05 08:07:28 PST QUERY:  SELECT base_id FROM cvag.sde.sde_object_ids WHERE id_type = i_id_type FOR UPDATE
2013-03-05 08:07:28 PST CONTEXT:  PL/pgSQL function "sde_get_primary_oid" line 15 at FOR over SELECT rows
2013-03-05 08:07:28 PST STATEMENT:  SELECT cvag_dev.sde.SDE_get_primary_oid ($1,$2)

This leads me to believe that one or more functions in the sde schema are defined with the database name as a reference, so restoring to a database with a different name doesn't work.
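For reference, the restore step is roughly the following (the dump file path is hypothetical; the database names match the ones in the error above):

Code:

# Restore the production dump into the differently named dev database.
# Sketch only: the dump file path is hypothetical.
import subprocess

subprocess.check_call([
    "pg_restore",
    "--dbname=cvag_dev",   # target database with the "_dev" suffix
    "--no-owner",
    r"/backups/cvag.dump",
])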

I have also tried creating the enterprise geodatabase first and then restoring a backup of the production database that excludes the 'sde' schema. The connection works, but now my 800+ feature classes are not registered with the geodatabase.

Are there any best practices published for working with production and development databases that might help me here?

Thanks.

Geodatabase Replication - Feature class source file path

Hello All,

My company is looking into using geodatabase replication to manage our base data between our three office locations. We create many figures (MXDs) for our projects with this base data and need the feature class source to be the same for all three locations. Does geodatabase replication allow the file paths to match? We don't want to have to relink the geodata every time a different office location opens the MXD. (We will be working with a two-way replica type.)

Also since this is the first time setting up geodatabase replication - any tips and tricks would be greatly appreciated.

Thanks in advance.

Georeferenced maps with point data connected to excel data.

I am working on a project where I am taking point data that was hand-drawn onto maps and creating a database. Each point has seven different attributes associated with it that I have entered into an Excel spreadsheet, but the points don't have an X/Y. I am planning to scan the maps (they were created in ArcGIS, so it will be simple to georeference them) and digitize the point data on the scanned images. My question is: how do I link my Excel data for each data point with the point that I can select in my georeferenced map? In other words, how do I structure an attribute table/geodatabase to meet my needs?
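One common pattern is to give each digitized point the same key value used in the corresponding spreadsheet row, then join on that key. A minimal arcpy sketch (all field names and paths are hypothetical):

Code:

# Join spreadsheet attributes to digitized points via a shared key field.
# Sketch only: field names and paths are hypothetical.
import arcpy

points = r"C:\data\project.gdb\survey_points"  # digitized points with a POINT_ID field
table = r"C:\data\attributes.xlsx\Sheet1$"     # spreadsheet rows keyed by POINT_ID

arcpy.MakeFeatureLayer_management(points, "points_lyr")
arcpy.AddJoin_management("points_lyr", "POINT_ID", table, "POINT_ID")

# Persist the joined attributes into a new feature class
arcpy.CopyFeatures_management("points_lyr", r"C:\data\project.gdb\survey_points_joined")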

Updating Feature Datasets with new data

Hi all,
I've got a bit of a curve ball here which I was hoping you might be able to help me out with.

I have my basemap feature classes (i.e. Streets, Property Boundaries, Address points, and so on) configured with their naming conventions, aliases etc...

I've added the GlobalID, enabled Versioning as well as Archiving.

I've just deployed our first SDE GDB.

Each quarter, most of these datasets receive an update.

I want to be able to remove all features within a feature class and re-populate with the new data.
Theory being, we can see property boundaries as they were at any given point in time in relation to our underground asset.

Other than opening ArcMap, selecting all features, deleting them from the existing feature class, then selecting all features in the new dataset and copying and pasting the data in... is there an easier way to do this?

Some of the datasets have over 3,000,000 records, and I'm finding it takes a considerable amount of time to copy the data across... that is, if my system doesn't crash.

I'm tempted to run the updates on the ArcGIS server, as it's closer to the SDE GDB, but I was curious whether anyone else has come across a similar issue and what their solution may have been?
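For reference, the scripted equivalent of that delete-and-reload workflow is roughly the following. A sketch only: the paths are hypothetical, and DeleteFeatures is shown rather than TruncateTable because Truncate cannot be used on versioned or archive-enabled data:

Code:

# Delete all rows from the target feature class, then load the new
# quarter's data. Sketch only: paths are hypothetical.
import arcpy

target = r"C:\connections\sde.sde\GIS.Basemap\GIS.PropertyBoundaries"
source = r"C:\data\quarterly_update.gdb\PropertyBoundaries"

arcpy.DeleteFeatures_management(target)
arcpy.Append_management(source, target, "TEST")  # "TEST": schemas must match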

Cheers,

Cory

Very slow performance on adding global ID

Hi,

we have a largish SDE database (various feature classes with millions of ST_GEOMETRY features) on ArcSDE 10.5, Oracle 11.2.0.3, running on a physical Windows Server 2008 R2 machine with local disk.

We're trying to add global IDs before duplicating the database with expdp/impdp and setting up geodatabase replication. We're using the ArcCatalog context menu on a feature dataset, and performance is simply terrible:

Adding global IDs to a single feature class of about 2.4 million features takes over 2 hours. Looking at Oracle AWR, I see that the actual update statement on the feature class takes about 4 minutes of CPU time; there is no significant activity on either the client or the database server, Oracle writes about 300 KB/s, and the whole system, including the ArcCatalog process, is basically idle.
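For reference, the scripted equivalent of the context-menu action, assuming the AddGlobalIDs geoprocessing tool is available at this release (the connection file and dataset name are hypothetical):

Code:

# Scripted equivalent of "Add Global IDs" from the ArcCatalog context menu.
# Sketch only: the connection file and dataset name are hypothetical.
import arcpy

arcpy.AddGlobalIDs_management(r"C:\connections\prod.sde\GIS.Distribution")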

Any idea what's going on here?

Thanks very much, Martin

IDW not working on Spatial Views : Arc Editor 10.1

Hi All,

I am trying to execute IDW on a spatial view (point features, Oracle DB), but the tool execution fails with the following errors:

Code:

000581 : Invalid parameters
000867 : contains invalid cell size or dataset

If I export the same view's data into a feature class, the tool executes successfully and the raster is generated without any error.

Is there any issue with running IDW directly on a spatial view, or does the view need any additional configuration?
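Since error 000867 points at the cell size, one thing worth checking is whether the tool can derive an extent and cell size from the view at all; setting them explicitly sometimes helps. A sketch, with a hypothetical view path, field name, and cell size:

Code:

# Run IDW against the spatial view with an explicit extent and cell size,
# since 000867 complains about the cell size. Names/values are hypothetical.
import arcpy
from arcpy.sa import Idw

arcpy.CheckOutExtension("Spatial")

view = r"C:\connections\oracle.sde\GIS.MY_POINT_VIEW"
arcpy.env.extent = arcpy.Describe(view).extent
arcpy.env.cellSize = 25  # map units; choose to suit the data

surface = Idw(view, "SCORE")
surface.save(r"C:\data\idw_out.tif")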

Any help would be appreciated.

Thanks,
Prashant

Raster Geodatabase not connecting

I created a new enterprise geodatabase yesterday for our raster images. I loaded a mosaic dataset into the geodatabase and then disconnected. When I tried to connect to the geodatabase today, it said "database connection failed", without specifying a reason. Does anyone know how to fix this, or should I delete the geodatabase and start over? :( Please let me know.

Thanks,
Julia

SDE GDB is producing state lineages despite all features being unregistered as versioned

We have an ArcSDE v10 / PostgreSQL v8.4 geodatabase that has been performing badly for the last few weeks when using long transactions for web editing. If we switch to short transactions (unregister as versioned), the system runs with good performance. Either way, the system can serve up to approximately 20 concurrent web editors without any hiccups. Despite no feature datasets being registered as versioned, the system is still producing state lineages. How come?

The database is compressed twice a day. Autovacuum is running with its default settings. A year ago we experienced performance problems and were advised to run a full vacuum with analyze. It was very helpful: the database performed well again. We have therefore done this on a weekly basis ever since. We have noticed that the PostgreSQL documentation states that this method can degrade performance over time because indexes are not treated the right way.
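(For context, the weekly routine amounts to something like the following; a sketch with a hypothetical connection file and database name:)

Code:

# Roughly the maintenance cycle described above: compress the geodatabase,
# then VACUUM FULL ANALYZE in PostgreSQL. Paths/names are hypothetical.
import arcpy
import subprocess

arcpy.Compress_management(r"C:\connections\sde.sde")
subprocess.check_call(["psql", "-d", "mygdb", "-c", "VACUUM FULL ANALYZE;"])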

The SDE GDB has been part of a two-way replica with another SDE GDB for almost half a year. Data has been synchronised daily by an automated process (Python scripts). One month ago the replica was unregistered in Replica Manager by mistake, without a final synchronisation; replication was not needed anymore. Data is exported every day, so no data was lost.

Though it is possible to do a full compress to state 0, the table sde_state_lineages is still rather large: about 8 GB after the database has been vacuumed. It is not possible to shrink it below this limit. When no datasets are versioned anymore and no replica version exists, can this be right? Could there be hidden system tables containing stale data? If so, is it possible to clean the system? If the system tables are not the problem, what is the most likely cause? Any suggestions?

We have tried to export/import the data in the DB. No effect.

Features in the SDE GDB: approx. 800,000 polygons.

Analyze_management fails for FEATURE component type

ArcGIS 10.1 SP1; data is in SDE, Oracle 11.2.0.2.0.
Upon loading a feature class, I have always analyzed the feature class. I created a Python script, as I do this so much.

After updating my scripts to 10.1 syntax (arcpy, <tool>_Management vs. <tool>_management, camel-case changes, etc.), I have hit a stump. Via a Python script:

Analyze on the BUSINESS component works fine.
Analyze on the FEATURE component gives the following:

Code:

Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
  File "C:\Program Files (x86)\ArcGIS\Desktop10.1\arcpy\arcpy\management.py", line 13516, in Analyze
    raise e
ExecuteError: Failed to execute. Parameters are not valid.
ERROR 000800: The value is not a member of BUSINESS.
Failed to execute (Analyze).
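For reference, the two calls look like this (the feature class path is hypothetical):

Code:

# The two Analyze calls in question; only the component type differs.
# The feature class path is hypothetical.
import arcpy

fc = r"C:\connections\sde.sde\GIS.Parcels"

arcpy.Analyze_management(fc, "BUSINESS")  # works fine
arcpy.Analyze_management(fc, "FEATURE")   # raises ERROR 000800 at 10.1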

We didn't spend long at 10.0, but I do notice a difference in the ArcCatalog GUI between 9.3.1 and 10.1. At 9.3.1, Business and Feature (as well as Raster, Adds, Deletes...) could be analyzed separately. At 10.1, once you hit Analyze, it's off and running. Whether or not this automatically runs statistics on each component type, I don't know. The 10.1 documentation still shows that each component type can be analyzed (via arcpy):
http://resources.arcgis.com/en/help/...000000n2000000

In ArcToolbox at 10.1, once an SDE feature class is selected, the options reduce to one checkbox: BUSINESS. What's up with FEATURE? Is it just not possible/needed anymore?

Have I missed something? Anything I should know before creating and digging through Oracle traces?
I guess I could try to create and export a model, though generally I write a script from the beginning.

geodatabase

Hello, I am new to ArcGIS. How do I connect to a geodatabase folder?

Changes to VERSIONS, STATE_LINEAGES, MVTABLES_MODIFIED tables in ArcSDE 10

Hi,

I am in the process of writing a Python script that generates a summary of the number of inserts, updates, and deletes for every feature class updated in every version of the GDB. We intend to use this as part of our QC efforts, as we have student interns who help with the data updates.

The script does this by querying the following tables (a sketch of the query pattern follows the list):
  • SDE.VERSIONS
  • SDE.STATE_LINEAGES
  • SDE.MVTABLES_MODIFIED
  • SDE.TABLE_REGISTRY
  • SDE.A/D TABLES
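
A minimal sketch of that pattern, using arcpy's SQL execution against the Oracle table names above (the connection file is hypothetical, and the joins are simplified; the real script walks each version's state lineage):

Code:

# Pull the raw ingredients for the per-version edit summary.
# Sketch only: connection file is hypothetical, joins simplified.
import arcpy

sde = arcpy.ArcSDESQLExecute(r"C:\connections\sde.sde")

# Which versions exist, and which state does each point at?
versions = sde.execute("SELECT name, state_id FROM SDE.VERSIONS")

# Which registered tables were modified in which states?
modified = sde.execute(
    "SELECT state_id, registration_id FROM SDE.MVTABLES_MODIFIED")

# registration_id -> table name, to resolve the A/D table suffixes
registry = sde.execute(
    "SELECT registration_id, table_name FROM SDE.TABLE_REGISTRY")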

We are running ArcSDE 9.3 on Oracle 10g, and hope to upgrade to ArcSDE 10.1 in the near future.

I remember reading that some of the back-end SDE system tables changed at 10, and I am wondering whether any of these tables were part of that change.

Thanks,

Sendhil

Problem using st_linefromtext on "large" linestrings in 10.1

Environment: ArcSDE 10.1, Oracle 11g R2, ArcGIS Server 10.1

Hi,
I have a table with a linestring CLOB column; the linestrings are larger than 4,000 bytes.
When I try to do a select using sde.st_linefromtext on the CLOB column, I get:
System.Data.OracleClient.OracleException: ORA-20004: Error generating shape from text: Invalid text used to construct geometry (-1).
ORA-06512: at "SDE.ST_GEOMETRY_SHAPELIB_PKG", line 12
ORA-06512: at "SDE.ST_LINEFROMTEXT", line 58

But it works fine when I run the same select on the same column in a different environment: ArcSDE 10, Oracle 10, ArcGIS Server 10.

Does ArcGIS 10.1 have a size limitation?
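For reference, the kind of statement that fails, wrapped in Python via cx_Oracle for reproduction (table, column, SRID, and credentials are all hypothetical):

Code:

# Reproduce the failing call: build a geometry from a WKT CLOB larger
# than 4000 bytes. Table/column names and credentials are hypothetical.
import cx_Oracle

conn = cx_Oracle.connect("user/password@dbhost/orcl")
cur = conn.cursor()
cur.execute("""
    SELECT sde.st_astext(sde.st_linefromtext(wkt_clob, 4326))
    FROM my_lines
""")
for (wkt,) in cur:
    print(wkt.read())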

Thanks,
Ido

SQL Postgres query in ModelBuilder: group by field1 and select first 4 rows based on field2

Hi all!
I'm going crazy over this "simple" problem:
I've got a point table (file geodatabase) with two fields: [fishnet] and [score].
I want to:
1) group points by [fishnet]
2) sort points descending by the [score] field
3) select the first 4 rows for each [fishnet]
4) export to a new table/feature class
all in a ModelBuilder flow.

This query works perfectly in Postgres (without the [ ], of course; note the <= 4, which keeps the first four rows per group):

SELECT [fishnet], [score] FROM
  (SELECT [fishnet], [score],
          row_number() OVER (PARTITION BY [fishnet] ORDER BY [score] DESC) AS partitby
   FROM table) tab
WHERE tab.partitby <= 4

Here is an example of my INPUT table and the corresponding OUTPUT table:

INPUT table:              OUTPUT table:
[fishnet]  [score]        [fishnet]  [score]
100        1              100        7
100        5              100        5
100        7              100        2
100        0              100        1
100        1              101        6
100        2              101        0
100        0              ........
101        0
101        6
........


Solution 1:
Use the above Postgres SQL query in the ModelBuilder flow. How can I do that?

Solution 2:
Use ModelBuilder tools to obtain the same result. How? Any suggestions?
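
For what it's worth, the same top-4-per-group logic can also be expressed in a few lines of arcpy (e.g. wrapped in a script tool inside the model), since the Postgres window function isn't available against a file geodatabase. A sketch with hypothetical paths:

Code:

# Top-4 rows per [fishnet] by descending [score], written to a new table.
# Sketch only: paths are hypothetical.
import arcpy
from itertools import groupby, islice

src = r"C:\data\work.gdb\points"
out_path, out_name = r"C:\data\work.gdb", "top4"
out = out_path + "\\" + out_name

# Sort by fishnet ascending, then score descending
rows = sorted(arcpy.da.SearchCursor(src, ["fishnet", "score"]),
              key=lambda r: (r[0], -r[1]))

arcpy.CreateTable_management(out_path, out_name)
arcpy.AddField_management(out, "fishnet", "LONG")
arcpy.AddField_management(out, "score", "DOUBLE")

with arcpy.da.InsertCursor(out, ["fishnet", "score"]) as icur:
    for _, grp in groupby(rows, key=lambda r: r[0]):
        for row in islice(grp, 4):  # first 4 rows per fishnet
            icur.insertRow(row)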
Thank you very much in advance!
Paolo

ArcMap 9.3.1 SP2: window moves itself when saving a change in the database

Dear friends, do any of you experience the same problem? When you save a change in the database in ArcMap, the table windows move to the center of ArcMap. I think the reason is that I switched to Windows 7 64-bit; with the 32-bit version I didn't get this problem.

Is there a fix for this? Or can anybody help me with this issue?

Unique Indexes in ArcSDE for Oracle 11g and Registering as Versioned

I am using ArcGIS 10.1 SP1 and have been trying to devise an indexing strategy for our new geodatabase. When implementing it, I am having problems combining unique indexes with registering as versioned. Whichever order I use makes no difference: the second operation won't allow both unique indexes and registration as versioned.

I saw an older 9.3 article on this that said you could do both if you register as versioned first and then create your unique indexes. That hasn't worked for me. Also, the ArcGIS 10 help says it doesn't recommend using unique indexes because of the potential for compress problems.
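For reference, the order suggested by that article, in script form (the feature class path and field name are hypothetical):

Code:

# Register as versioned first, then add the unique index, per the 9.3
# article mentioned above. Path and field name are hypothetical.
import arcpy

fc = r"C:\connections\sde.sde\GIS.Parcels"

arcpy.RegisterAsVersioned_management(fc, "NO_EDITS_TO_BASE")
arcpy.AddIndex_management(fc, "PARCEL_ID", "uix_parcel_id", "UNIQUE")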

Is it a good idea to skip unique indexes and use Data Reviewer to check uniqueness later? I thought it would make sense to enforce uniqueness where it should apply.

Thanks,


Nathan