
Application Design

Now that we have an optimal data model and index set, how do we construct the application to make best use of the database? There are probably three main considerations that affect application design:

  • Reduce unnecessary SQL requests, by remembering (caching) data and reducing SQL statement parsing.
  • Reduce network round trips, either by using stored procedures or by exploiting the array interface.
  • Reduce lock contention through sensible transaction design and locking strategies.

Caching

The less SQL we send, the lower our database overhead. For most applications, reducing SQL is achieved by caching frequently accessed static data.

Oracle 11g allows us to get the benefits of client-side data caching without having to implement our own caching code. The 11g client-side result set cache (officially known as the OCI consistent client cache) stores recently retrieved result sets in the client's memory. If a client program (SQL*Plus, Java, C#, or any other client) issues the same query twice within a certain interval, the client-side cache can return the previously fetched result set without sending the query to the database server.

However you implement caching, be careful not to cache data that is volatile—slow data is usually better than wrong data.
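Whatever the caching layer, the idea can be sketched as a small application-level cache with a time-to-live. This sketch is illustrative only (the `TtlCache` class and its names are invented for the example; it models caching in general, not the OCI client result cache itself):

```python
import time

class TtlCache:
    """A minimal time-to-live cache for relatively static lookup data.

    Illustrative only: fetch_func stands in for a database query that
    is executed on a cache miss or after an entry expires.
    """

    def __init__(self, ttl_seconds, fetch_func):
        self.ttl = ttl_seconds
        self.fetch_func = fetch_func     # called only on a cache miss
        self._store = {}                 # key -> (value, expiry_time)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and time.monotonic() < entry[1]:
            return entry[0]              # fresh hit: no database call
        value = self.fetch_func(key)     # miss or expired: query the database
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value
```

Each `get` within the TTL is served from memory; only a miss or an expired entry costs a database round trip. The TTL is also the bound on how stale the cached data can become, which is why volatile data should not be cached this way.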

Reducing Parsing

Parsing is the process of preparing a SQL statement for execution, and involves checking syntax and permissions as well as creating an execution plan, which describes the step-by-step process Oracle will undertake to resolve the SQL statement.

Oracle maintains a cache of SQL statements in the shared pool. If a matching SQL is found in the shared pool, then most of the parse overhead can be avoided; this is a soft parse. If no matching SQL is found, a hard parse must be performed.

Hard parsing is expensive, and it is also one of the most common causes of latch contention. You avoid hard parsing primarily through the use of bind variables. In Java, you do this by using PreparedStatement objects with placeholders for the variable parts of the SQL. In C# (using ODP.NET), you add Parameters to OracleCommand objects. Other languages (Perl, Python, Ruby, etc.) have their own idioms for supplying bind variables.
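The effect of bind variables on parsing can be illustrated with a toy model of the shared pool, keyed (as Oracle's lookup essentially is) by the exact SQL text. The function and data below are invented for the illustration:

```python
def run_queries(sql_texts):
    """Toy model of the shared pool, keyed by exact SQL text.

    Returns the number of hard parses: executions whose statement text
    was not already cached. A matching text means a cheap soft parse.
    """
    shared_pool = set()
    hard_parses = 0
    for sql in sql_texts:
        if sql not in shared_pool:
            shared_pool.add(sql)     # build and cache an execution plan
            hard_parses += 1
        # else: soft parse; the cached plan is reused
    return hard_parses

# Literal values make every statement text unique: 100 hard parses.
literals = [f"SELECT * FROM customers WHERE id = {i}" for i in range(100)]

# A bind variable keeps the text constant: 1 hard parse, 99 soft parses.
bound = ["SELECT * FROM customers WHERE id = :id"] * 100
```

The bound version pays the plan-building cost once, no matter how many distinct customer ids are queried.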

If all else fails, the database CURSOR_SHARING parameter can be set to SIMILAR or FORCE to have Oracle automatically replace the literals in your SQL with bind variables.

Array Processing

When fetching or inserting rows, Oracle allows the application to fetch or insert more than one row at a time. This reduces the number of calls issued to the database server, and can also reduce network traffic and logical IO overhead.

In JDBC, array fetch is as simple as calling setFetchSize on the statement. In other languages, or when performing array inserts, the procedure can be more involved, but it is still well worth the effort. Figure 3 shows how array fetch can improve performance.

Figure 3 Effects of increasing array size on table scan performance
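The arithmetic behind this improvement is straightforward: the number of fetch calls, and hence round trips, falls in proportion to the array size. A sketch, assuming for simplicity that every fetch call returns a full array of rows (in JDBC the knob is setFetchSize; in python-oracledb it is cursor.arraysize):

```python
import math

def fetch_round_trips(total_rows, array_size):
    """Number of fetch calls (round trips) needed to retrieve
    total_rows when the client fetches array_size rows per call."""
    return math.ceil(total_rows / array_size)
```

Raising the array size from 1 to 100 cuts a 10,000-row fetch from 10,000 round trips to 100; beyond that, returns diminish, which is the flattening visible in curves like Figure 3.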

Stored Procedures

Another way to reduce network traffic is to use stored procedures. A complex transaction may involve many individual SQL statements. If these SQL statements are sent from a remote application, each execution involves a network round trip, and network overhead can easily dominate response time.

If the transaction is encapsulated in a stored procedure, then these round trips can be eliminated, and network overhead minimized.
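A rough cost model shows why. Assuming (illustratively; the parameter values below are invented, not measured) a fixed network latency per round trip and a fixed server-side execution time per statement:

```python
def transaction_time_ms(statements, latency_ms, exec_ms, use_stored_proc):
    """Toy cost model: each remotely issued statement pays one network
    round trip, while a stored procedure bundles every statement into
    a single call. Server-side execution time is paid either way."""
    round_trips = 1 if use_stored_proc else statements
    return round_trips * latency_ms + statements * exec_ms
```

With 50 statements, 2 ms of latency, and 0.25 ms per statement, the remote version costs 112.5 ms against 14.5 ms for the stored procedure; the difference is almost entirely network time.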

Transaction Design

The need to maintain transactional integrity requires that Oracle sometimes prevent SQL statements from executing simultaneously by using blocking locks.

Application transaction design should aim to hold locks for the shortest possible time, although you should almost never sacrifice transactional integrity to achieve this. One of the most significant design decisions affecting lock duration is the choice between an optimistic and a pessimistic locking strategy.

The pessimistic locking strategy assumes that a row is likely to be updated by another user between the time you fetch it and the time you update it. To prevent a conflicting update, the pessimistic strategy locks the rows as they are retrieved (in Oracle, typically with SELECT ... FOR UPDATE).

The optimistic locking strategy assumes that it is very unlikely that a row will be updated between the time it is retrieved and the time it is modified. The row is therefore not locked when fetched; instead, when the row is later modified or locked, the session must check that it has not been changed by another session in the meantime.

The optimistic strategy tends to hold locks for a shorter duration, though it can backfire if another transaction breaks the optimistic assumption, in which case the update must be retried or abandoned. Figure 4 compares the two approaches.

Figure 4 The optimistic and pessimistic locking strategies.
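The optimistic check is commonly implemented with a version (or timestamp) column. The in-memory sketch below is illustrative only; the table structure and names are invented:

```python
class StaleRowError(Exception):
    """Raised when the optimistic assumption fails: the row changed."""

def optimistic_update(table, key, new_value, expected_version):
    """Sketch of an optimistic update against a version column.

    The SQL equivalent is UPDATE ... SET value = :v,
    version = version + 1 WHERE id = :id AND version = :expected,
    then verifying that exactly one row was updated."""
    row = table[key]
    if row["version"] != expected_version:
        raise StaleRowError("row was changed by another session")
    row["value"] = new_value
    row["version"] += 1
```

If the version no longer matches the one read at fetch time, the update is rejected and the application must re-fetch and retry; no lock was held between the fetch and the update.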
