Religion Wars in Lilliput
When you have to decide between tables and queries...
It's not wise to blindly trust so-called "common sense". A question has long divided the community of Delphi programmers: should we use tables or queries when accessing SQL database servers? Opinions are split, and I'm afraid the latest polls would give queries as the current winner. I will try to show here that, when it comes to navigation, queries are the loser's choice.
Big-endians and little-endians cannot both be right, but they can both be wrong. What's the truth behind the iron curtain of the BDE implementation of tables and queries?
The pros and cons of navigation
Let me be more accurate: the real enemy for client/server developers is not the TTable component, but TDBGrid. Traditionally, C/S applications have been provided with very simple user interfaces. Take a look at an ATM: do you see grids or navigators? I must admit it is a very ugly interface, but it is the interface needed for a kind of application that imposes strict demands on network traffic.
Grids give users the illusion that they are moving over a local copy of the whole data file, though what they really have is a narrow, movable window of about 20-30 rows at a time. A very naive implementation of browsing would fetch all the records from the source. Indeed, I have heard some programmers say that tables behave this way; I'll prove later that they are wrong.
So, unrestricted navigation is very inefficient, especially when large data sets are involved. Can you always avoid this operation in your applications? If so, you don't need to read the rest of this paper. But if you are forced to browse a large result set for some reason, you had better learn the most efficient way to implement navigation with Delphi tools.
The lethal weapon
SQL Monitor output will be the touchstone that confirms the veracity of our assertions. As we all know, this tool can be executed from inside the IDE or as an independent program. We will launch it with the Database|SQL Monitor menu command. SQL Monitor gives us a trace of all SQL statements sent by the BDE to the database server. Moreover, it even shows the native calls to the client library that passes these statements to the server. We can also watch the data sent to and received from the database.
We will need a very simple application in order to test the basic navigation facilities offered by Delphi. This application will connect to a table located in a database server and browse it record by record. We won't use a database grid, to avoid cluttering the SQL Monitor output with all the commands needed to fill a whole page of records. I also suggest you use InterBase in the role of the database server, for two major reasons. First, it is the least common denominator for all Delphi & C++ Builder programmers, including those using the Professional version. Second, InterBase generates a cleaner output in SQL Monitor. A test performed with Oracle, for instance, would require adjusting the ROWSET SIZE parameter in the BDE configuration; if we don't, we will see Oracle sending 20 rows every time we request a single record. Nevertheless, I have experimented with Oracle and MS SQL Server, and the general lines of the BDE implementation are very similar. Trust me.
So, let's start a new application and drop a TQuery component on the main form. Connect its DatabaseName property to some SQL alias, say, IBLOCAL. Then type the following statement into the SQL property:
select * from Employee
Now, drop a TDataSource component and link its DataSet property to the query. Also add a DBNavigator and set its DataSource. Double-click the query and add all fields to the application. Select them all, then drag these fields and drop them inside our form, in order to create DBEdit components automatically. Finally, set the query's Active property to True.
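For the record, here is a rough runtime equivalent of those design-time steps; it is only a sketch, and it assumes a form that already contains Query1, DataSource1 and the data-aware controls wired together:

Query1.DatabaseName := 'IBLOCAL';
Query1.SQL.Text := 'select * from Employee';
DataSource1.DataSet := Query1;
Query1.Open;    // the same as setting Active to True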
Let's go to the Database menu in the IDE and execute the SQL Monitor command. Then run the application and see what's happening below the quiet surface of the BDE.
The bad behaviour of queries
If you skip all the noise representing the way the BDE instructs the InterBase API to communicate with the server, you'll basically be left with the entries starting with SQL Prepare and SQL Execute. In this case, you will only see an initial execution of the same statement we typed into our query:
SQL Prepare: select * from Employee
SQL Execute: select * from Employee
SQL preparation is slang for "query compiling". If a query contains parameters, you can compile it once and execute it several times with different parameter values. Query execution, on the other hand, opens a database cursor on the server side, representing the result set of the select statement. I won't discuss whether this execution instantly produces the whole result set, or whether the result set is generated on demand; I suspect that generation on demand is the policy adopted by most SQL servers.
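As a minimal sketch of this prepare-once, execute-many pattern (the Dept_No column and the two department codes are just assumptions about the sample Employee table):

Query1.SQL.Text := 'select * from Employee where Dept_No = :Dept';
Query1.Prepare;                               // compiled once on the server
Query1.ParamByName('Dept').AsString := '600';
Query1.Open;                                  // first execution
Query1.Close;
Query1.ParamByName('Dept').AsString := '621';
Query1.Open;                                  // executed again without a new prepare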
Next in the log, you'll find a very special instruction: Fetch. What is important to us now is that the BDE can only fetch the next record from the result set. It cannot fetch backwards or skip intermediate records. This is true even for database servers that support server-side bi-directional cursors. So, you may move forward with the navigator's Next button, and you'll see the corresponding Fetch lines in the SQL Monitor window. At this point, if you move the active row a few records backwards, you won't see any output. Sure: those records are stored in a client-side cache, so the BDE does not need to fetch them again. The BDE currently implements bi-directional cursors on the client side, whatever our server's capabilities.
Now, be prepared for the bad news: just click the Last button on the navigator and see what happens. In order to retrieve the last record of the query, the BDE needs to fetch every intermediate record! You will see as many Fetch instructions as there are records in the base table...
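You can reproduce the same behaviour in code, without the navigator; this is only a sketch for watching the log:

Query1.Next;     // one Fetch per call
Query1.Next;
Query1.Prior;    // no Fetch: the record comes from the client-side cache
Query1.Last;     // one Fetch for every remaining record in the result set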
Let me propose a metaphor: a little man is on one side of a river and wants to reach the other side. He does it by dropping stones as he advances across the stream. Once he arrives at the opposite shore, a bridge has been built. Of course, he cannot jump straight into the middle of these troubled waters, as long as he wants to keep himself dry. He would need some supernatural force to succeed.
Initially, tables get it wrong...
In order to test the table component, remove Query1 from the form and add a TTable. Assign IBLOCAL to DatabaseName again, and set TableName to Employee. Then change its Active property to True, and redirect DataSource1's DataSet property to point to Table1. That's all. Now clear the SQL Monitor log and run the application.
This time the log fills up with statements. What does all this madness mean? It is just that the table component needs to know its field and index definitions. The first statement verifies the existence of the table on the server:
select rdb$owner_name, rdb$relation_name, rdb$system_flag,
rdb$view_blr, rdb$relation_id
from rdb$relations
where rdb$relation_name = 'employee'
The next instructions retrieve information about fields, indexes and validations:
select r.rdb$field_name, f.rdb$field_type, f.rdb$field_sub_type,
f.rdb$dimensions, f.rdb$field_length, f.rdb$field_scale,
f.rdb$validation_blr, f.rdb$computed_blr,
r.rdb$default_value, f.rdb$default_value, r.rdb$null_flag
from rdb$relation_fields r, rdb$fields f
where r.rdb$field_source = f.rdb$field_name and
r.rdb$relation_name = 'employee'
order by r.rdb$field_position asc
select i.rdb$index_name, i.rdb$unique_flag, i.rdb$index_type,
f.rdb$field_name
from rdb$indices i, rdb$index_segments f
where i.rdb$relation_name = 'employee' and
i.rdb$index_name = f.rdb$index_name
order by i.rdb$index_id, f.rdb$field_position asc
select r.rdb$field_name, f.rdb$validation_blr, f.rdb$computed_blr,
r.rdb$default_value, f.rdb$default_value, r.rdb$null_flag
from rdb$relation_fields r, rdb$fields f
where r.rdb$field_source = f.rdb$field_name and
r.rdb$relation_name = 'employee'
order by r.rdb$field_position asc
Of course, these instructions take time to execute and increase network traffic. Imagine a typical business application with more than 100 tables in a data module, and think about what happens when the module is loaded and tries to open the set of tables needed to start the application...
However, this problem has a very easy solution. Just turn on the ENABLE SCHEMA CACHE parameter in your SQL alias definition. When this parameter is on, the information retrieved in the table prologue is stored in local files the first time the application runs. The master file's name is scache.ini, and the BDE creates an .scf file for each table, in the application directory or in the path specified with the SCHEMA CACHE DIR parameter. When the application is run again, the BDE extracts the table definitions from these files instead of querying the server. I will not discuss here all the implications of enabling the schema cache, because it's a well-documented feature.
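If you prefer to leave the alias untouched, these parameters can presumably also be overridden from a TDatabase component; the following is just a sketch under that assumption, and the cache directory is a hypothetical path:

Database1.AliasName := 'IBLOCAL';
Database1.DatabaseName := 'EmployeeDB';      // application-local alias
Database1.Params.Values['ENABLE SCHEMA CACHE'] := 'TRUE';
Database1.Params.Values['SCHEMA CACHE DIR'] := 'c:\cache';
Database1.Connected := True;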
Finally, you'll see the expected statement that opens a cursor on your table:
select * from Employee order by Emp_No asc
But wait a minute... we did not ask the table component for any index. Why has the BDE added this order by clause, on the primary key fields, to the cursor?
... but you must admit it's getting better
Here is the reason. You are in for a big surprise when you click the Last button on the navigator. Suddenly, the BDE closes its initial select statement, then prepares and executes the following one:
select * from Employee order by Emp_No desc
The first record fetched through this cursor will be the last record of the table, won't it? And if you keep going backwards, the BDE keeps fetching records from this cursor. I can't help thinking of Michael Jackson doing his moonwalk.
One more hint. As you probably know, InterBase supports both ascending and descending indexes. The dark side of this flexibility is that InterBase cannot optimise the previous SQL statement if it only has an ascending index on Emp_No. You'll have to create a descending index on the primary key (or on the ordering column) if you want backwards navigation on an InterBase table. In contrast, Oracle indexes can be scanned in both directions.
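For our example, the statement could look like this (the index name is just an illustration):

create descending index Emp_No_Desc on Employee (Emp_No)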
How Locate is implemented
But what if the angel wants to have some fun and drops us in the middle of the river? Well, first of all, can angels do this sort of thing? Yes, they can. This is just what happens when we perform a Locate operation on a table component. Drop an Edit control on the form, add a Button, and handle its OnClick event this way:
procedure TForm1.Button1Click(Sender: TObject);
begin
  Table1.Locate('EMP_NO', Edit1.Text, []);
end;
What the BDE does is to execute the following statement, without closing the active cursor:
select * from Employee where Emp_No = :1
The :1 token stands for a parameter, which is initialised with the value passed as the second argument of Locate. So, locating a record by its primary key is a matter of fetching just that record, if it exists. But what if the search is performed on a different column?
procedure TForm1.Button1Click(Sender: TObject);
begin
  Table1.Locate('LAST_NAME', Edit1.Text, [loPartialKey]);
end;
This time, things are a little more complicated, because Delphi itself plays a role in the operation, and you'll see Delphi and BDE statements mixed in the SQL Monitor window. First, an exact search is attempted. After all, if your surname begins with 'Mac', maybe it is just 'Mac'!
select * from Employee where Last_Name = :1
If this search succeeds, you already have the record. However, you can see that the BDE apparently issues another Locate, this time on the primary key. What is really happening is that the first statement is launched by Delphi: take a look at the source code of the DBTables unit and watch how the GetLookupCursor method is used by TTable. The second statement is executed once Delphi has the primary key of the desired record, and its goal is to reposition the BDE cursor.
But if your surname starts with 'Mac' and it's not just 'Mac', then it must come right after that prefix in dictionary order. So this is the statement that will appear in the log:
select * from Employee where Last_Name > :1 order by Last_Name asc
If this statement yields no record, the search fails. But if it does, the BDE cursor is repositioned again by means of another Locate on the primary key, using the value just found. So we needed at most two fetches to get the record.
I did not say how queries react to Locate, but I can do it now: they advance their cursor, painfully fetching every intermediate record until they either find the key or reach the end of the cursor in vain.
Case insensitive search
But you must be very careful about which options you include in the third Locate parameter. When a case insensitive search is specified with the loCaseInsensitive option, the BDE/Delphi dream team fails, at least with InterBase, Oracle and MS SQL Server. Why? Because GetLookupCursor, the method we mentioned before, looks for a case insensitive index in order to make sure that a search query could take advantage of it. Neither InterBase nor Oracle support this kind of index. Microsoft SQL Server does, but only depending on a decision taken when the software is installed, so Delphi cannot rely on it at runtime.
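For instance, a variation of the earlier handler like this one cannot be resolved through a lookup cursor on those servers:

Table1.Locate('LAST_NAME', Edit1.Text, [loCaseInsensitive, loPartialKey]);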
Filters
In most cases, filters are implemented by merging the filter expression into the where clause of the select statement. If you are planning to use a query based on a single table with a simple where condition, you can also consider using a table with a filter expression. You should also avoid case insensitive comparisons (the foCaseInsensitive option of the FilterOptions property). Partial matches, unfortunately, are not implemented well. We could set the filter expression to something like this:
Last_Name = 'Mac*'
In this case, the filter expression is evaluated on the client side. But if you push your neurones a little harder, you'll come up with this equivalent expression, which the BDE can merge into the where clause:
Last_Name >= 'Mac' and Last_Name < 'Mad'
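In code, the table could be set up like this; just a sketch, assuming the default FilterOptions:

Table1.FilterOptions := [];   // no foCaseInsensitive
Table1.Filter := 'Last_Name >= ''Mac'' and Last_Name < ''Mad''';
Table1.Filtered := True;      // the condition can now travel in the where clause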
Another interesting technique is used when the FindFirst, FindNext, FindPrior and FindLast methods are applied while a filter expression is specified (Filter) but not active (Filtered). Delphi implements this by launching a secondary query (yes, GetLookupCursor again). These four methods navigate through the lookup cursor, and every time a record is found, Locate is used internally to synchronise the table.
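A minimal sketch of this pattern, assuming a Memo1 control to collect the results:

Table1.Filtered := False;     // the filter is defined but not active
Table1.Filter := 'Last_Name >= ''Mac'' and Last_Name < ''Mad''';
if Table1.FindFirst then
  repeat
    Memo1.Lines.Add(Table1.FieldByName('LAST_NAME').AsString);
  until not Table1.FindNext;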
If you can't say it with two words...
First of all, you need to understand this point:
- I'm not recommending indiscriminate browsing for client/server applications. If you can avoid grids, do it, as they consume too much network bandwidth.
But if you cannot exclude free navigation from your application, then follow these recommendations:
- Never use queries for browsing medium to large result sets. The only justification would be the need to sort some columns in descending order, something BDE tables currently do not support. The definition of a "medium-sized result set" varies with the database server and the network configuration; if I had to give a number, I'd say about 500 rows.
- Live queries without cached updates have no advantage over tables, since they execute the same prologue on activation.
- Use queries for small result sets that don't require insertions. I have had problems when adding rows to InterBase queries, which forced me to close and reopen the query after insertions. The extreme case is the singleton select: queries are the best tool for that purpose.
- The table component's implementation is far from perfect. Turn on schema caching. Always avoid case insensitive searches on servers that have no case insensitive indexes (in practice, this includes MS SQL Server, since Delphi cannot detect its configuration at runtime). Be aware that some VCL components, like TDBLookupComboBox, may trigger these searches inside their implementation.
I'm amazed that Borland/Inprise has never given a hint about these very important topics. I'm starting to think of a plot devised by some political forces and ...
Ian Marteens is a guy with a seriously schizophrenic behaviour. In his spare time, he likes to read books like "The Golden Bough" and "Hamlet's Mill" (I told you his problem was serious). He is the author of a book centred on database programming, "The Dark Side Of Delphi", currently published in Spanish. Ian can be reached here: