DOA-generated queries to the Oracle Data Dictionary

Helene

We have a serious problem with execution times for some "system generated" queries against the oracle data dictionary. Particularly this one is heavy:

"select constraint_name, column_name from sys.all_cons_columns where owner = :owner and table_name = :table_name and constraint_name not like 'SYS_C%' order by constraint_name, position"

This is executed quite frequently (approx. 1200 times in 2 hours), and costs 11,200 consistent gets on each execution!

We are not using the EnforceConstraints property. I have checked all .pas and .dfm files throughout the project, and it is always set to False.

The query seems to be generated from a data module in which we have some Oracle datasets, connected via the OracleProvider to client datasets. All are set to ReadOnly. I have tried changing a number of different properties in order to get rid of this problem, but I cannot find any way to achieve it. I can see that some of the SQL is generated in the PSGetIndexDefs function. This function is also bad with regard to the number of network roundtrips: it generates one new roundtrip for each constraint instead of reading the information about all interesting constraints for a table in a single roundtrip.

Marco, I don't know if you remember that I was in contact with you quite a long time ago regarding this problem. Do you have any new information about it? If I understood you correctly back then, you were going to look into the problem so that we could get rid of these unnecessary (?) selects.

With regards
Helene
 
This query is executed when a table's constraints are fetched from the dictionary. In a 3-tier application this is necessary to determine the indexes, key fields, default order, and so on. It should, however, only occur once per table for the duration of a session. If this query is executed 1200 times for, let's say, 12 different tables, that would imply that the application has made 100 connections. Does this math apply to your application?

I tested the query on my development databases, and though results may differ on databases with a lot of dictionary data and on slower networks, it took 20 seconds for 1200 queries. That is a very small fraction of 2 hours, of course. Note that it consumes 1 network roundtrip for all constraints of the table, not one roundtrip per constraint.

How much time does the query cost on your configuration?

------------------
Marco Kalter
Allround Automations
 
OK, so it is not possible to get rid of these SQL queries when using a client dataset with a provider.

In this case the dataset, provider and client dataset are all actually on the client, which explains why the number of executions is so high... The reason we have done this is to ensure that all the data is kept on the client once it has been fetched from the database. I suppose the easiest way to solve this, then, would be to drop the use of the client dataset.

Maybe this is only based on a misunderstanding? Will the TOracleDataSet be as effective as the client dataset for searching (Locate)? Will it keep the data on the client once it has been fetched from the database?

With regards
Helene
 
As far as I know the TClientDataSet will perform the search operations locally, so this should work as expected. I still wonder if the numbers I mentioned in my previous reply match your situation though.

------------------
Marco Kalter
Allround Automations
 
Assuming the ClientDataSet ignores IndexDefs, are these queries really necessary?

My system is 3-tier and stateless, so each time a dataset is requested, new OracleSession and OracleDataSet components are created. The query is executed and then the components are destroyed.

The constraints query adds significant overhead. Of all the queries in the system, it is the #1 worst database query in terms of "buffer gets": the number of database blocks read from the buffer cache, each block being 8K.
 
I am sorry I haven't answered your question, Marco; I have been, and still am, a bit too busy to look more closely at the actual numbers. The math will vary from one customer site to another anyway. I think part of the problem is the use of the ALL_ dictionary views, which is of course necessary, since the table can be owned by another user (schema) than the one currently accessing it. I see there are also others who have a problem with these queries being among the worst in terms of the number of consistent gets required.

What I have noticed is that this got very much worse after we introduced a number of views into the database, where we previously had only tables. Basically there are two views per table, and the number of tables owned by the actual user is approx. 220. There are about 890 indexes. These numbers will increase.

We have tried to run some of Oracle's catalog scripts, which seems to help, but only for a while.

These should be run from a restricted session, with no other users accessing the database (which is also why we are not so happy with this 'solution'):
@catalog
@catproc
@catexp
@catoctk
@caths
@catldr
@dbmspool
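
The catalog scripts listed above normally live in $ORACLE_HOME/rdbms/admin and are run as SYS with the database in restricted mode. A minimal SQL*Plus sketch of such a maintenance session; only the script names come from the list above, the rest is an assumption about a typical procedure:

```sql
-- Run as SYS; requires downtime, as noted above.
connect sys as sysdba
shutdown immediate
startup restrict                  -- only users with RESTRICTED SESSION privilege can connect
@?/rdbms/admin/catalog.sql        -- "?" expands to ORACLE_HOME in SQL*Plus
@?/rdbms/admin/catproc.sql
alter system disable restricted session;
```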

When I tried to do some debugging in your code, it seemed to me that part of this is triggered from TDataSet. However, I guess that the actual SQL used to retrieve the information from the data dictionary is DOA code? One possible way to handle this could be to allow customization of this SQL. That way we could add redundant tables for this information, over which we could have a much greater degree of control. I don't like this idea very much, since it also means you need an extra step after adding or changing tables and indexes, but we do need a solution for this problem. If the SQL as it is executed now is kept as the default, then only those experiencing serious problems with it would have to do this extra work.

With regards
Helene
 
I commented out most of the code in TOracleTableInfo.GetConstraints, and my application seems to work fine.

It is possible that this causes some problems that I wasn't able to reproduce or anticipate.

I should note that my ClientDataSets ignore IndexDefs.
 
Interesting!
That leads to two new questions: You commented out "most of the code". Which parts of the code did you not comment out?

And for Marco: Could this lead to other problems, or could it be a viable solution also for us?

With regards
Helene
 
I have had some similar performance problems when querying data dictionary views like ALL_TAB_COLUMNS. For example, on our production database, a query to return data on 16 columns for one table takes 2.8 - 3.2 seconds. This occurs whether I use DOA components or query out of SQL*PLUS or any other tool like PL/SQL Developer. On our development machine which has a little less data but is a much slower machine the query takes about 0.3 seconds.

I have discussed this with Oracle Corp. and they said they thought 2.8 - 3.2 seconds was appropriate for this query. Part of our problem is that there is no restriction on this system as to who can run DBA processes. Someone had analyzed most of the data dictionary tables which must have caused part of the problem. I have removed the statistics, but the problem persists.
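
Under the assumption made above, that statistics on the SYS-owned dictionary tables are (part of) the culprit, they can be located and removed along these lines; the table name in the second statement is only an example:

```sql
-- List analyzed dictionary tables (requires access to DBA_TABLES)
select table_name, last_analyzed
from   dba_tables
where  owner = 'SYS'
and    last_analyzed is not null;

-- Remove the statistics from one such table (repeat per table)
analyze table sys.obj$ delete statistics;
```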

I have also discussed this with an Oracle DBA and he suggested that it may be time to recreate the database. His feeling is that there are some issues with the data dictionary that may or may not have anything to do with the analyzing of the data dictionary tables. He feels that at this point nothing short of doing a full export and recreating the database will fix this problem, and that this is something that should be done every couple of years as a standard DBA maintenance practice to allow for peak system performance.

Have any of you with this problem tried to run these queries in SQL*PLUS and see how long they take? This may be an issue with the state of your data dictionary and not an issue with the DOA components. It may make your app run faster by preventing the DOA components from running all these queries, but you may have a persistent problem in your database which needs attention.
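
To follow this suggestion, the constraint query from the start of the thread can be timed directly in SQL*Plus. A sketch, assuming the PLUSTRACE role is available for AUTOTRACE and using sample bind values:

```sql
set timing on
set autotrace traceonly statistics   -- reports consistent gets per execution

variable owner      varchar2(30)
variable table_name varchar2(30)

begin
  :owner      := 'SCOTT';            -- sample values, adjust to your schema
  :table_name := 'EMP';
end;
/

select constraint_name, column_name
from   sys.all_cons_columns
where  owner = :owner
and    table_name = :table_name
and    constraint_name not like 'SYS_C%'
order  by constraint_name, position;
```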
 
Helene, I only left what is necessary for the workflow. The function really doesn't do anything anymore.

Here is the original code:

procedure TOracleTableInfo.GetConstraints(DataSet: TOracleDataSet);
var
  ThisConstraint: string;
  C: TOracleConstraintInfo;
begin
  // Already read?
  if OracleConstraintInfoList <> nil then Exit;
  OracleConstraintInfoList := TCollection.Create(TOracleConstraintInfo);
  // Open a query to select all constraint columns for this table
  with Query do
  try
    Debug := False;
    Clear;
    SQL.Add('select constraint_name, column_name from sys.all_cons_columns');
    SQL.Add('where owner = :owner and table_name = :table_name');
    if not DataSet.Session.POLite then
      SQL.Add('and constraint_name not like ''SYS_C%''');
    SQL.Add('order by constraint_name, position');
    DeclareVariable('owner', otString);
    DeclareVariable('table_name', otString);
    SetVariable('owner', TableOwner);
    SetVariable('table_name', TableName);
    Execute;
    // Loop through all constraint columns to find constraints and their columns
    C := nil;
    while not EOF do
    begin
      ThisConstraint := Field('constraint_name');
      if (C = nil) or (C.ConstraintName <> ThisConstraint) then
      begin
        C := TOracleConstraintInfo.Create(OracleConstraintInfoList);
        C.ConstraintName := ThisConstraint;
        C.ColumnNames := TStringList.Create;
        C.OracleTableInfo := Self;
      end;
      C.ColumnNames.Add(Field('column_name'));
      Next;
    end;
  except
    OracleConstraintInfoList.Clear;
    raise;
  end;
end;

Here's what's left of it:

procedure TOracleTableInfo.GetConstraints(DataSet: TOracleDataSet);
begin
  if OracleConstraintInfoList = nil then
    OracleConstraintInfoList := TCollection.Create(TOracleConstraintInfo);
end;
 