
Planning an Oracle Database

This sample chapter from Oracle DBA on Linux and Unix covers basic physical database design. A database administrator needs to ascertain the business requirements and design the database to meet them. Good database design ensures fast performance, simplified management, and protection from data loss. By understanding the types of applications the database will support and following some basic tuning guidelines, a good design can largely address the needs of the business. Poorly designed systems do not lend themselves to scaling, regardless of the hardware or tuning thrown at them later. Once a bad process or design has been moved into production, it is very difficult to fix, so good planning saves time and money.

Database Planning Essentials

  • Planning is the most critical part of any project lifecycle.

  • There are many factors, both technical and non-technical, that impact the configuration of the system.

  • The role of the database within the overall system will dictate many of the parameters when planning the individual database.

  • Plan and configure the database to support the specific type of application to be implemented: OLTP, DSS, or a hybrid of both.

  • Use established design and tuning principles, such as OFA, and avoid resource contention when planning and configuring the database.

The planning and design phase of a database is the most influential factor in determining its performance and integrity. At this stage, the DBA should have identified the major business requirements. Now is the time to design a database to meet those business requirements. The DBA will look at the system architecture to determine where to put the database. This will depend largely on the type of application(s) the database will need to support. Next, the Unix/Linux server supporting the database will be selected based upon both technical and non-technical considerations. The DBA will then determine the physical structure and layout of the database files while focusing on performance and integrity. After these steps are completed and a solid database design has been created and reviewed, the actual process of building the system can begin.

System Architecture

A computer system is typically composed of at least three components: the applications, the data, and the network. A new component, the Internet (Web), is now in most systems as well. The location, configuration, and role of these components are what systems architecture is all about.

If each of these components (application, data, network, and Web) is designed properly from the outset, the system can run very well. If any of these components is mishandled or misplaced, technical tuning will most likely not prevent a large system from being slow.

This relates to the fact that even though a poorly conceived process can be sped up, it is still inefficient. These types of inefficiencies might be barely noticed on small systems, but they become major problems as the system grows. Poorly designed systems do not lend themselves to scaling regardless of the hardware or tuning thrown at them. There is a popular industry estimate that a problem costing $1 to fix in the design phase costs $1,000 to fix in production. I'm not quite sure how that number was derived, but I will say that once a bad process or design has been moved into production, it is very difficult to fix.

The DBA needs to know where the database sits in relation to the other components in the system. This will dictate the inputs into the database, the outputs, and the processing requirements. How much of the database will support Web users? How much will be for OLTP users? When will batch jobs be executed on the server? Is this database the primary data store and what are its uptime requirements? These are questions the DBA should ask when examining the architecture.

Each of these architectural components is implemented as a tier on a machine. For example, the application tier can reside on a client's PC as a program while the data is on a Unix box in an Oracle database. The network simply connects these tiers.

An additional Internet tier comprising a Web interface is now being introduced in many businesses, especially the dot-coms. Because there are already three tiers and the number of servers is scalable, this is often referred to as the n-tier architecture. This architecture will be discussed in detail in Chapter 19, "9i Server New Features."

Next, take a look at some basic architecture types.

Basic Client/Server Architecture (Two-Tier)

The basic client/server (two-tier) architecture was briefly discussed in Chapter 2, "Architecture of the Oracle Server." This architecture has been in existence for several years and will likely continue for the foreseeable future. It is, however, being modified with the addition of dedicated application and Web servers. This architecture involves just two entities: the client and the server.

Client

The client is the machine where the end user sits. It can be as simple as a dumb terminal with an Oracle Forms order entry application, or it can be a powerful PC with a Visual Basic application. The basic idea is that a user sits at this machine, enters requests, and data "auto-magically" appears on the screen. The user doesn't care where the data or application comes from, just as long as it appears when needed. This goal of transparency is what you want; the user should feel like everything is stored on his/her own local machine.

The role of the client has been in a state of flux. In mainframe times the client was just a dumb terminal with no processing power; it simply displayed information received from the server. This is referred to as a thin client. As PCs became more powerful, a greater share of the workload was transferred to the PC. The idea was to split the processing between the client and server. This was termed a fat client because the client held at least some of the application.

Dumb Terminals Still Exist?

When the reference is made to dumb terminals, it is not solely a historical reference. Dumb terminals are still used by some companies, particularly telephone call centers. I don't see these terminals lasting for too much longer, but they still do exist in some businesses.

As of this writing, the trend appears to have shifted back in favor of the thin client. Factors influencing this shift include the advent of Web browsers and more powerful servers, without a corresponding jump in network throughput. Now most architects are attempting to move as much of the data, application, and processing as possible to the server, leaving the client to only request and display information.

Author's Perspective: 20 years ago, the thin client was a terminal attached to a mainframe. Today, it's a Web browser connected to the Internet. In between these periods was the fat client phase. I view this as a pendulum swinging between thin and fat clients. I would be surprised if it stays this way forever, but for now, thin clients are the way to go.

Server

In a modern two-tier architecture the server holds both the application and the data. In this case, the application resides on the server rather than the client. The reason for this is that a Unix/Linux box will have a great deal more processing power and disk space than a PC, so why not use those resources? In addition, if a distributed processing situation occurs where huge amounts of data are sent from the server to the client for processing, the network will get choked. Also, now that so many client requests are directed towards the Internet, it is not really practical to download the results to the client for processing. Figure 3.1 shows the resulting thin client architecture.

Figure 3.1 Thin Client and Server Overview

Three-Tier Client Server Architecture

In a three-tier architecture, the data and applications are split onto separate servers. The client still is used as a front end, simply requesting and displaying data. The server-side processing, however, is distributed between a database server and an application server.

There are several reasons for splitting the application and data tiers. By having two servers, each can be dedicated to processing either data or application requests. If both tiers reside on the same server, contention for resources can occur. Also, this isolation of tiers makes for a more manageable system. It is easier to configure and manage a box specifically dedicated to database activities or to application needs.

Figure 3.2 shows how the three-tier architecture is implemented.

Figure 3.2 Three-tier Architecture

Capacity Planning/Sizing

Capacity planning is the practice of sizing a system for the needs of the business. From the DBA's perspective, this means determining what kind and how big of a server and database are needed. The decision will normally be a joint one made by the DBA and SA, and then okayed by management. This is usually a tricky proposition because it involves estimating growth when there is often no baseline. This exercise often results in a potentially large purchase of expensive equipment, thus making the situation more difficult.

If the system is sized too small, more equipment will be needed just to keep the system going. This is expensive because of the lost performance, the time to do an upgrade, and the equipment itself. Don't assume this will occur months or years later; an undersized system might become an issue very quickly, especially in a Web environment. If the system is sized too big, it looks like expensive overkill. Even if the system is sized perfectly for the first year or two, system resource requirements will change over time. If this looks like a difficult situation, it should, because it is.

The DBA and SA need to understand that their sizing will be questioned (as it should be) and will be second-guessed later (which is counter-productive). They have to make the best estimate possible based on the current (and usually incomplete) information. Make sure the estimate is reasonable, logical, and supported by numbers when possible. Finally, allow it to be analyzed by other technical people and management so they have their say.

The issues I look at when sizing systems fall into both technical and non-technical categories, covered in the next two sections. In my opinion, it is better to err on the side of over-sizing rather than under-sizing. But remember, a well designed and tuned application will have a large impact on the size of machines needed for your system!

Technical Sizing Issues

The DBA usually estimates the disk space needed over a period of time (at least one year) and the SA takes it from there. However, there are a few issues the DBA should have input on.

Operating System

I won't get into a huge Unix versus NT debate here; it is safe to assume you've already decided on Unix because you are reading this book. Although Windows 2000 is now available and supposedly improved, many IT people are still suspicious. Traditionally Unix has been viewed as more stable, more configurable, and more scalable than NT. It will be interesting to see how Windows 2000 changes that perception.

Unix operating system vendors such as Sun, HP, and IBM have enjoyed a strong position in the market. Most Oracle databases run on these operating systems. The DBA should consider this fact when selecting an operating system. Newer Oracle software releases will appear first on Sun and then on the other major platforms. It is almost inconceivable for Oracle Corporation to stop supporting any of these platforms. Bug fixes and new products will also come to these more common platforms before lesser-known platforms.

The DBA should also consider the skills and experience of the SA in these matters. It is preferable to run an operating system on a platform the SA feels comfortable with rather than experimenting. The DBA should also consider his or her own expertise on the operating system. Is it something new for the DBA? Are the differences between the OS's something the DBA is able to learn? When possible, the DBA should be willing to learn new platforms, as it will add to his or her own experience and marketability.

The growth and acceptance of the Linux operating system has provided another option to Oracle DBAs. Linux and the associated hardware are usually smaller in scale and less expensive than the larger Unix machines. Therefore, Linux tends to compete more with NT than the large Unix systems do. In fact, based on what I have personally seen, Linux seems to be the system Unix people prefer when faced with going to Intel-based systems. There are quite a few smaller systems such as individual Web servers and development/test boxes running on various flavors of Linux.

The guideline I've seen used for the most part is to use an established Unix vendor (Sun, HP, or IBM) for the large production machines and use a Linux distribution for smaller testing or development machines. Even this, however, is changing as more companies, particularly smaller dot-coms, are choosing to run everything on Linux. From a DBA perspective, I prefer an established Unix vendor to run anything production. If I'm doing work on my own (such as this book) or doing testing/development, I prefer Linux.

Disks

This is typically a SA decision, but generally the more and faster the disks, the better. The DBA should request enough disk space for at least one year of database growth. From an Oracle perspective, having a larger number of disks to spread I/O across is far better than having a few large disks. I will show some examples of this later in this chapter. Also, having the disks spread out across multiple controllers is beneficial. The more controllers you can spread the I/O across, the better. The DBA should push for disks that are hot-swappable. The choice in disk brands and Logical Volume Managers (LVMs) is really an SA call, but the DBA should be familiar with the available products. If the database will be a production system, some degree of disk mirroring should be used.

Discussion of RAID, raw filesystems, and I/O caching is covered in detail in Chapter 13, "Unix Server Monitoring."

Memory

In terms of memory, the more and faster, the better. Oracle likes disks, but it likes memory more. Production Oracle SGAs typically start around 200M and can easily reach 500M. And that is real memory, not swapped out to a disk. Also, one server will likely have several database instances. Don't forget to consider other applications running on the box besides Oracle. If there is only enough memory for Oracle, other applications might be continually swapped to disk because of memory shortages, which will affect the entire system. As a DBA, you should try to convince the SA to max out the memory on the box if at all possible.

CPU

This is really the SA's call, except that the box should have multiple processors (SMP). Oracle will take advantage of multiple processors, and they are actually required for some features such as Parallel Query. Most systems are not CPU bound unless there is bad code running, in which case extra processors will not help. Regardless, go with multiple processors if possible to take advantage of Oracle's parallel processing. More exotic architectures such as MPP and NUMA are discussed in Chapter 12, "Unix Operating System Architecture."

Non-Technical Sizing Issues

The non-technical aspects of system sizing are often overlooked. They include more than just the price tag.

Machine Cost

This is one item every manager considers. Oftentimes when they see a server exceed $20,000 they go into a state of shock. After all, their PC only cost $1,000, so why should a slightly bigger computer cost so much more? At this stage, the technical person needs to explain in non-technical terms why a Unix server is in a different class than a PC. It helps if you justify the costs in terms of revenue lost to downtime. I've been in this situation before and it's easy to get frustrated. Simply be professional and explain why this is a cost of doing business.

If purchasing a machine is impossible, look at other options. This means searching for leases, loaners, and special deals. Some vendors and hardware resellers will allow a shop to borrow a machine for a proof of concept. Once a server is actually installed and running a test system, it is easier to get managers to fund it. Server trade-ins and leases are also possible; check with the hardware reseller. Also look for special promotional deals. For example, there recently was a deal where Internet startups could get Unix servers cheap and database software at reduced cost.

When getting a new machine isn't possible, you are left to using what is already on hand. Often an IT shop has a server that can support the system, but it is usually less than optimal. In this situation, the server is usually older, not configured properly, and already supporting other systems. See about getting such servers upgraded. Also, sometimes several machines can be cannibalized into a better server. Work such as this will have to involve the SA.

System Maintenance

Supporting the system is often overlooked. Many times, companies purchase a state-of-the-art system (hardware and/or database) but don't have anyone in the shop who can support it, so it ends up even less reliable than a basic system. This happens more often than you would think. Being a "regular" DBA or SA is tough enough, but having to deal with new features such as Web servers or clusters will only add to the difficulty. Simply assuming that a technical person can "pick up" the specifics on the job is a costly assumption. Also, just because someone goes to training doesn't mean they can manage a real system. Even if the DBAs and SAs can manage a complex system, remember that they are highly sought-after individuals and might leave for better opportunities.

Keep Your Talented People or the System Will Suffer

I was at one shop that had purchased an early high-availability Sun Cluster where we ran Oracle Parallel Server. The system itself was fairly stable when we had a highly skilled SA who knew about clusters. However, within days of that SA leaving, the system began crashing and the replacement SAs had a terrible time keeping it up. Although they were good SAs themselves, they lacked the skills with clusters. Ultimately, it was an expensive learning curve for the company.

I recommend using new technologies where they make sense; just don't assume that they run themselves. If you find yourself in a shop getting Oracle Parallel Server (OPS) or other advanced features, be prepared to do some extra work. Try to go to training if possible and do some reading and research on your own. If you can become proficient, you will find it a rewarding experience.

Technology/Vendor

Become familiar with the technologies you are examining before you buy. That sounds obvious, but it is easy to skip. For example, if the hardware vendor is discussing SMP (Symmetric Multi-Processors) systems and MPP (Massively Parallel Processors) systems, do you know the difference? Are you willing to trust the vendor's explanation as to why you need one over the other? Time to do some research if you don't know.

Examine the vendor you are buying from. Sure, Sun, HP, and IBM are the big names in Unix today, but what about other options? This includes Linux. If you decide to go with a specific Linux distribution, will it still exist in the future? Don't go with an OS that is on the way out. Small, local vendors and resellers should also undergo this scrutiny.

From a DBA perspective, consider which versions of Oracle are supported on the OS. For example, Oracle develops and releases each new version on Sun Solaris first, then ports it to other popular OSs. When dealing with Linux, check the Oracle online support pages (MetaLink and Technet) to see whether Oracle is certified for a specific distribution and kernel. This is where research on the part of the DBA is necessary.

Ask DBAs and SAs at other shops about their experiences on a specific platform and database. These people can provide insights and technical advice that can be hard to find elsewhere. If you are making a case for or against a specific system, these kinds of testimonials can carry a lot of weight.

Optimal Flexible Architecture

Now that you have planned and chosen your system, it is time to consider the Oracle-specific details.

Oracle Corporation has a recommended method of installing and configuring its databases on Unix. Originally written by Cary Millsap, this is called Optimal Flexible Architecture (OFA). It is a common method of installing Oracle and creating databases in a standardized manner. Put another way, OFA is a recommended directory structure for Oracle software installations and database files.

OFA allows multiple versions of Oracle to be installed on one server. Many databases can be managed simultaneously with a minimum of contention. Benefits of OFA include:

  • A standardized configuration across all servers

  • Easily manageable installation and maintenance of different Oracle versions

  • Separation of Oracle software files from database files

  • Separation of database files from one another to improve manageability and performance

  • Logical and distinguishable database file names

  • Separation of files across multiple mount points to reduce I/O contention

The OFA is simply a set of recommendations, not a set of absolute rules. However, Oracle installs the software in an OFA-compliant manner if you create your filesystems correctly. Building the database in an OFA-compliant manner is discussed in this chapter and is demonstrated in Chapter 5, "Creating a Database." That chapter does not rehash all the points of OFA, but does cover what you need to know.

Create multiple filesystems to spread your files across as many disks as possible. At one shop I worked at, we had a Sun E450 with sixteen 18G disks available for the database. I specified disk mirroring (see Chapter 4, "Machine Setup and Installation"), which effectively reduced the number of usable disks to eight. Based on that, a filesystem was created on each mirrored pair. Mount points /u01 through /u08 were created to hold Oracle software, configuration, and database files.

OFA Minimum Requirements

At a minimum, an OFA compliant database requires four mount points: /u01 for the software and /u02, /u03, and /u04 for database files.

A mount point is a Unix term for a filesystem. A filesystem is, from a DBA's perspective, a directory structure that holds files. It can span multiple disks and can be "mounted" from a different machine over a network (Network File System, NFS). Work with the SA to get multiple filesystems created. Each mount point is typically named with the convention /uN where N is the filesystem number (such as /u01).

The next section discusses the OFA structure by looking at what goes under /u01 and then examining the rest of the OFA.

Files on /u01

Underneath the first mount point, create an app subdirectory; this is where application software resides. Next, under /u01/app, create an oracle subdirectory. Beneath the /u01/app/oracle directory, an important split takes place. The subdirectories product, admin, and local are created, as shown:

 /u01/app/oracle/product
 /u01/app/oracle/admin
 /u01/app/oracle/local

Under ../product, each version of the Oracle database is installed. For example, if you were running Oracle 7.3.4 and Oracle 8.1.6 on the same machine, you would have:

 /u01/app/oracle/product/7.3.4
 /u01/app/oracle/product/8.1.6

This is an important distinction. You can have multiple versions of Oracle running simultaneously, but the software is stored in different locations. A common mistake is to try to load all the software into one directory and expect Oracle to figure it out. That won't work! Oracle knows which version you want to run from the Unix environment variable $ORACLE_HOME. This variable defines which Oracle software executables are used. For example:

 echo $ORACLE_HOME
 /u01/app/oracle/product/8.1.6

This shows that your Unix environment is set to run Oracle 8.1.6. All the Oracle software installation files for 8.1.6 are stored in this directory.
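To run the older release instead, repoint the environment before starting any Oracle executables. A minimal sketch, using the paths from the example above:

 # repoint the environment at the 7.3.4 home
 ORACLE_HOME=/u01/app/oracle/product/7.3.4
 PATH=$ORACLE_HOME/bin:$PATH
 export ORACLE_HOME PATH
 echo $ORACLE_HOME
 /u01/app/oracle/product/7.3.4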

Under the ../admin directory are all the startup, configuration, and log files for each database instance, regardless of version. For example:

 /u01/app/oracle/admin/prod_1
 /u01/app/oracle/admin/dev_2

This indicates there are two databases on this machine (prod_1 and dev_2). Prod_1 is an Oracle 7.3.4 production database while dev_2 is an Oracle 8.1.6 development database. If you examine dev_2, you might find the following subdirectories:

  /u01/app/oracle/admin/dev_2/adump
  /u01/app/oracle/admin/dev_2/bdump
  /u01/app/oracle/admin/dev_2/cdump
  /u01/app/oracle/admin/dev_2/pfile
  /u01/app/oracle/admin/dev_2/udump

Each of these subdirectories holds files specific to the database instance dev_2. The directory structure and most of the files are created during database installation. The following sections briefly examine the contents of each subdirectory.

../adump

adump contains trace files created by the database for auditing purposes. Every time a user connects as internal to the database, the event is logged and a small, unique trace file is created.

../bdump

The bdump directory contains perhaps the most important file to the DBA: the alert_sid.log file. This file, commonly referred to as the alert log, contains diagnostic and error messages about the database. Every time the database is started or is shut down, messages and parameters are written to this file. The DBA needs to check this file several times daily.

Background dump (bdump) files are generated when an Oracle process experiences unexpected problems. A message stating a bdump file was generated is sometimes written in the alert.log. The DBA should examine the bdump trace files and try to determine what caused them. These files should not normally be deleted and, when serious problems occur, they can be sent to Oracle Support for detailed analysis.
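As a quick illustration, the DBA's daily check of the alert log might look like this (the path and SID follow the dev_2 example above; the exact file name is an assumption):

  # scan the alert log for Oracle errors
  grep ORA- /u01/app/oracle/admin/dev_2/bdump/alert_dev_2.log
  # or watch it live during a startup or shutdown
  tail -f /u01/app/oracle/admin/dev_2/bdump/alert_dev_2.log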

../cdump

Core dump files are sometimes generated when an Oracle process has really big problems. These files are very large, sometimes filling up an entire filesystem. They are not much use to the DBA, but if needed, Oracle Support can analyze them. Unless there is a real need to keep and compress a core dump, it is usually best to delete the file.

../pfile

The pfile directory contains the init.ora file for the database. This is the configurable startup file that contains database parameters. The details of this file were discussed in the previous chapter.

../udump

User dump (udump) trace files can be intentionally generated by the DBA or developers. These files contain a wealth of diagnostic information, including the SQL being executed inside the database. The DBA and/or developers can turn on tracing to generate these files. The files are then analyzed to identify problems or tune SQL.
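For example, tracing for a single session might be switched on like this (a sketch; the login and the generated trace file name are assumptions, and tkprof is the Oracle utility that formats trace files into readable reports):

  sqlplus user/password <<'EOF'
  ALTER SESSION SET sql_trace = TRUE;
  -- run the SQL to be analyzed here
  EOF
  # format the resulting udump trace file into a readable report
  tkprof /u01/app/oracle/admin/dev_2/udump/dev_2_ora_1234.trc report.txt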

Under the /u01/app/oracle/local directory are general-purpose files for the Oracle user and DBAs. It is common to have a sql directory containing DBA SQL scripts here. Unix shell scripts for the DBA are sometimes stored here. The crontab file for the Oracle user (see Chapter 6, "Daily Activities") is usually stored here. Import and export dump directories can sometimes be stored under the ../local directory, but that varies quite a bit from shop to shop. Export and import files are also often stored under the ../admin/database_sid/exp directory.

Notice that nowhere under /u01 have data files been discussed. That is because, ideally, database files will not be located under /u01. From an administration standpoint, it is easier to maintain the separation between actual data files and the Oracle installation/configuration files. This simplifies monitoring space usage, backups, and performing recoveries.

Data Files and Other OFA Conventions

There should be at least three mount points dedicated to data files. An example is as follows:

 /u02/oradata/prod_1/system_prd_01.dbf
 /u02/oradata/dev_1/customer_dev_dat_01.dbf

Physical database files are stored under the ../oradata directory. Notice the subdirectory identifying the database each file belongs to. Also notice the descriptive file names identifying the database (dev or prd)—these aid in administration. In this case, there are no other files on /u02. It is okay to have additional database files on this filesystem, but do not include non-Oracle files or Oracle configuration/installation files.

File names can be in any form; Oracle doesn't care. However, it is a good idea to make them descriptive. Identify the tablespace, database, file number, and file type in the name. For example, customer_dev_dat_01.dbf tells you this is in the customer tablespace of the development database; it is a data (rather than an index) file; it is the first file in the tablespace (you don't know out of how many); and it is a .dbf (data) file.

Typically, a database will have a very similar core set of files for the SYSTEM, USERS, TEMP, and TOOLS tablespaces. When you're dealing with multiple data and index tablespaces, the names can get long.

Officially, the recognized file extensions are limited to .ctl for control files, .log for redo log files, and .dbf for all other data files. It's my opinion that this is a little restrictive. I have an even bigger concern about the redo log files being .log. This can give the impression that these files are viewable and somehow less important because they are just log files. In reality, if you lose one of these, you can be in big trouble. File extensions that I have seen used are listed here.

 File/Tablespace Type     Extension
 Control                  .ctl
 SYSTEM                   .dbf
 USERS                    .dbf
 TOOLS                    .dbf
 TEMP                     .dbf
 DATA                     .dbf, .dat
 INDEX                    .dbf, .idx
 ROLLBACK                 .dbf, .rbs
 Redo log                 .log, .rdo


Those are the basics of setting up an OFA database installation. As the database is configured, created, and managed, the benefits of this structure become apparent.

Application and Database Considerations

To design and configure a physical database, you need to know what kind of transactions will be occurring and with what frequency. You can size and configure an Oracle database very effectively to handle long, intensive transactions, or you can set it up for many small quick transactions. Oracle can handle either situation very well. The trick is identifying what is actually being asked of your database and then making the appropriate settings. Keep in mind that these requirements are seldom clearly defined and they usually change.

There are two popular types of database setups: OLTP and DSS.

Online Transaction Processing (OLTP)

OLTP is characterized by many, short transactions hitting the database with a high frequency. For example, the database can have hundreds of users in a call center taking orders from customers over the telephone. A sample transaction is as follows:

  1. A customer calls the call center operator.

  2. A selection on the customer's record occurs to get basic information.

  3. The customer wants to order a product.

  4. An insert is made into the ORDER and BILLING tables for the new order. An update to the CUSTOMER table might occur.

  5. The operator commits the transaction and the customer hangs up. This is the end of the transaction.

This is not a long or intensive operation. All the data selected should be indexed, and each insert touches only one row in a few tables. This represents a typical OLTP transaction.

Designing for OLTP is not a problem when there are only a few users and a small CUSTOMER table. Problems start to occur when there are hundreds of users taking orders very rapidly. Problems increase when the users are spread out over a wide geographical area.

Another caveat of OLTP systems is the perceived need to be highly available. Many organizations are convinced their OLTP system needs to be fully operational 24 hours a day, 365 days a year. This brings on some very special needs and is quite expensive. Clustered servers, parallel server databases, and a very proficient staff are needed to provide this type of availability. For these reasons, most systems are not truly 24x7, but high availability requirements do exist.

Technical considerations for OLTP systems are as follows:

  • Have many small rollback segments. The standard is one rollback segment for every four active transactions. Each transaction will likely take one (maybe more) extents. This can be a problem, but the bigger problem is the rollback segment header, which must be read first to find an available block in the rollback segment. If several transactions are fighting each other over this header, the system will suffer.

  • Size the redo logs appropriately. A redo log switch is a performance hit, so you don't want switches occurring all the time. Size the redo log files to about 100M to start with.

  • Have a large shared pool. Think about the previous sample transaction. Doesn't it sound reasonable for the same SQL to be executed many times, just with different customers? If you use bind variables for the customer values, the SQL will be cached in memory and performance will improve (see the sketch after this list).

  • Keep indexes separate from the data. Create your primary key and all other indexes in a separate tablespace, and put that tablespace on a separate disk and controller.

  • Use small temporary segments. Little sorting will be needed and most of it will occur in memory. If the SQL contains many large joins, writes to the TEMP segment can result; in that case, see about tuning the SQL to use fewer joins, or increase the sort memory with SORT_AREA_SIZE. OLTP sorts should not go to disk.

  • Look at using MTS, especially when you have a large number of users. This can reduce the total amount of memory needed, but remember that it comes with a performance cost.
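Here is a minimal sketch of the bind-variable point above (the login, table, and column names are hypothetical). Because the statement text is identical for every customer, Oracle can reuse the parsed statement from the shared pool:

  sqlplus user/password <<'EOF'
  VARIABLE cust_id NUMBER
  EXECUTE :cust_id := 101
  -- same statement text every time; only the bind value changes
  SELECT customer_name FROM customer WHERE customer_id = :cust_id;
  EOF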

OLTP is the most common application to manage. If it were the only type of transaction to plan for, the DBA's job could almost be easy. Unfortunately, the DSS transaction often finds its way into a system as well.

Decision Support Systems (DSS)

Decision Support Systems (DSS) are, for the purposes here, business-oriented batch-processing systems. This chapter does not get into the caveats of running a data warehouse or number-crunching system for scientific research, even though the same principles apply. Batch processing involves a few, very large and intensive queries hitting many tables and requiring many sorts. Nightly processing tasks are considered DSS. A sample DSS job is as follows:

  1. Create a report for management to identify sales over the past month. Update and cancel any orders that still have a "pending" status.

  2. Select all the orders that were valid over the past month, group them by payment option, and sum the results. Also break this down by the customer's state of residence.

  3. This requires a select on the ORDER table, as well as sorting and summing the data, which would likely require using the temporary tablespace. Queries against the CUSTOMER table also take place.

  4. Update the orders with a "pending" status. These orders reside in the rollback segments for the length of the transaction.

  5. Format the results and create the report.

This is a relatively simple report; most businesses have more complex needs and business rules. However, the idea behind this type of system is a few, very intensive queries hitting the database. These queries can take hours or even days to run. If the job fails, the rollback process can be quite lengthy before the job is restarted. In the meantime, management wants their reports and they frown on hearing "sorry, the job failed." This is even more critical during month-end or year-end processing.

Key components of supporting batch processing are as follows:

  • Have fewer, larger rollback segments. DML in a DSS environment can take a large amount of rollback and can hold it for a long time. You do not want a job to die because the rollback segment ran out of space. Create an especially large rollback segment for the big jobs. Before the SQL statement, use SET TRANSACTION USE ROLLBACK SEGMENT segment_name to force a transaction to use a specific rollback segment (see the sketch after this list). You will learn how to avoid SNAPSHOT TOO OLD errors and about rollback segment tuning in Chapters 10 and 11, respectively.

  • Have a large temporary tablespace created for disk sorts and joins. DSS sorts will often exceed the SORT_AREA_SIZE parameter and go from memory to disk. These sorts can be large and you don't want to run out of room.

  • Use a smaller shared SQL area in favor of a larger database buffer cache. The SQL statements are not going to be executed so frequently that gains made by sharing the SQL are noticeable. Instead, increase the size of the database buffer cache so more data and rollback segment blocks can be cached in memory.

  • Make sure you are running well-tuned and properly indexed SQL. This applies to both OLTP and DSS, but I have seen processing times reduced to a fraction of what they originally were simply by SQL tuning and indexing. This can save hours of processing per SQL statement and reduce stress on the rollback segments. Eliminating unnecessary sorts also helps reduce the use of the temporary tablespace.

  • Use partitioning and the Parallel Query (PQ) option where applicable. If your CUSTOMER and ORDER tables are huge (millions of rows), partitioning and using PQ might provide some performance benefits. These options start getting into the realm of VLDBs (Very Large Databases) and data warehousing, but if they can help your batch-processing efficiency, use them.
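A sketch of directing a big batch job to a dedicated rollback segment (the login, segment, and table names are hypothetical; SET TRANSACTION must be the first statement of the transaction):

  sqlplus user/password <<'EOF'
  SET TRANSACTION USE ROLLBACK SEGMENT rbs_big;
  UPDATE orders SET status = 'CANCELLED' WHERE status = 'PENDING';
  COMMIT;
  EOF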

Systems dedicated to DSS are not as common as OLTP systems, but they do exist. However, OLTP systems that have some DSS requirements are more common. These are called hybrid systems and are discussed in the next section.

Hybrid Systems

Hybrid systems are those with a mix of OLTP and DSS needs. Although most applications are OLTP in nature, most systems are really hybrids. For example, virtually any business that takes orders from customers is OLTP. However, how long can those businesses exist without financial or sales reports? Although the OLTP application is where the company actually makes the money, there are back-end support functions that need DSS (batch) processing.

Review the needs of OLTP versus DSS. Do they look like they are in opposition? They should, because the fundamental characteristics of OLTP are exactly the opposite of DSS. If you optimize a system to serve one, the other suffers. This can pose a dilemma for a DBA trying to please both the OLTP users (those who bring revenue to the company) and the DSS users (the managers to whom the DBA reports).

There are three common solutions to this dilemma: throttle between OLTP and DSS on one system, run two separate systems, or run OLTP and DSS simultaneously on one system. Figure 3.3 shows how the different hybrid systems can be addressed.

Figure 3.3 Handling Hybrid Systems

Run OLTP during the day and do batch processing at night. If the users are taking phone orders from 7 a.m. to 6 p.m., simply run the database configured for OLTP during that time. At 6 p.m., bounce the instance and bring it up using a DSS configuration. Whenever the nightly backups and batch processing are done, bounce the database again, but restart it using an OLTP init.ora.

This works okay if a few assumptions can be made. The hours of OLTP need to be well-defined and not 24x7. Users can use the system after it has been bounced for DSS operations, but performance will be impacted. Depending on the call volume in the evenings, this might be acceptable from a business standpoint. Another issue involves Web users. The DBA needs to examine the impact that DSS processing and bouncing the server have on Web users. Remember, Web access and online ordering imply a 24x7 uptime requirement.
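A sketch of the evening bounce (the pfile name and path are assumptions; a matching morning script would point back at the OLTP pfile):

  sqlplus /nolog <<'EOF'
  CONNECT / AS SYSDBA
  SHUTDOWN IMMEDIATE
  STARTUP PFILE=/u01/app/oracle/admin/prod_1/pfile/initprod_1_dss.ora
  EOF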

A second solution is to run two databases simultaneously, one for OLTP and one for DSS. Real-time updates are made to the OLTP database. At a regular time interval, these changes are propagated to the DSS database.

There are several methods available to do this. The DBA can perform nightly exports of some or all of the OLTP system and import into the DSS system. This method is conceptually simple, but is time consuming and might not meet the needs of the DSS system. A better solution is to use replication over database links to propagate the changes in a real-time manner. A third option available in Oracle 8i is to run the DSS database as a standby database. Archive logs from the OLTP database are automatically applied to the DSS database. This keeps the DSS system just a few steps behind the OLTP system, but as of now, only read-only queries are allowed against the DSS. The DBA will need to examine these options and perhaps use a mix of them to support a second DSS system.

The third option for hybrid systems is to run both OLTP and DSS simultaneously. This is probably the most common situation imposed on DBAs. Oftentimes, a system starts off as OLTP, but DSS requirements are added gradually until the system becomes a hybrid.

In this case, the DBA must balance the needs of the OLTP users against the batch-processing tasks. I tend to favor the OLTP users in this case. Create as many rollback segments as you would for OLTP, but also have a few large rollback segments that you assign specifically to large transactions. Keep both the database buffer cache and the shared SQL area at a reasonable size.

Make sure the temporary tablespace is large enough for the batch sorts, but use SORT_AREA_SIZE to keep the OLTP sorts in memory. I recommend using dedicated server connections, not MTS, in this situation. If MTS is used, configure it so batch job users have a dedicated server. It is possible to serve both the OLTP and DSS users, but it is a tricky proposition.
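One way to give only the batch sessions a bigger in-memory sort area is to raise the limit at the session level (the login and value are illustrative):

  sqlplus user/password <<'EOF'
  ALTER SESSION SET sort_area_size = 10485760;  -- 10M for this batch job only
  -- run the batch SQL here
  EOF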

There is one final option that is really a derivative of running both OLTP and DSS simultaneously. Oracle Parallel Server (OPS) allows multiple instances to access one physical database. One instance is configured for OLTP and another instance is configured for batch processing. Assuming the system is designed to handle this type of processing and assuming a qualified staff is available to manage the system, this is a good option because it provides both application partitioning and high availability.

Multiple Applications in One SID

In large IT shops, it is not uncommon for a single database SID to support multiple, unrelated applications. This is necessary to keep from having to support hundreds of databases simultaneously. Even if the applications are different, try to separate OLTP apps from DSS apps and then design the databases accordingly.

Avoiding Disk Contention

Designing the layout of the data files across the mount points is very important for performance. No matter how much memory your machine has, disk I/O will be necessary and it will hurt performance. However, if highly accessed files are on separate disks, this effect can be minimized. Take a look at each file type in terms of contention.

 File Type    I/O Activity
 Control      Relatively little access.
 SYSTEM       Relatively little access.
 USERS        Little access if objects are created with a separate tablespace defined.
 TOOLS        Little access.
 DATA         Depends on the table. Can range from very high to very low. Identify and segregate high-activity tables.
 INDEX        Depends on the object indexed, but often corresponds to the table activity.
 TEMP         Depends on the amount of disk sorts. Low for OLTP, but can be high for DSS or if the SQL is poorly tuned.
 ROLLBACK     Very high.
 Redo log     Very high.


Based on this table, you want to separate the high-activity data and index files, the rollback segments, and the online redo logs. After those have been assigned to separate disks, spread out the rest of the less contentious files. Ideally, the DBA will know which tables/indexes are used most frequently and will have many disks on which to allocate files. Unfortunately, this is seldom the case and the DBA has to work with whatever information and resources are available.

Before reviewing a sample layout, there are a few other items to remember:

  • Mount points can be composed of multiple disks, and those disks might be part of several mount points. For example, disk A might have 9G allocated to /u02 and 9G allocated to /u03. If you put a rollback tablespace on /u02 and a redo log on /u03, you will have contention on the same physical disk even though you think you have distributed the files properly. Work with the SA and know which filesystems mount to which disks and which disks share which controllers. This will also affect your backup and recovery plans.

  • Production databases are placed in archive log mode. This means that after every redo log switch, the online redo log file is copied by ARCH to another location (disk, tape, or optical media). This area will be highly active and needs to be large. If ARCH cannot write to this location (because the disk is full), the database will hang and no DML will be allowed. Also, if there is so much I/O contention that ARCH cannot write the archive redo log before that online redo log file is needed again, problems will occur. Therefore, make sure the disk holding the archive log destination (the LOG_ARCHIVE_DEST parameter) has plenty of free space and is free from contention, and monitor it regularly (a monitoring sketch follows this list). Do not place online redo log files and archive log files on the same disk.

  • Create at least three, perhaps even more, online redo log groups. Because the database will likely be in archive log mode, the extra groups will give ARCH time to write to the archive log before LGWR needs the group again. Also, multiplex the online redo logs.

  • Consider multiplexing your data and index tablespaces. In this context, a tablespace will be composed of files on separate filesystems. For example, if you know a table will get big and you want to break it apart, place the data files for the tablespace on /u02 and /u03. The first data file created (*_1.dbf) will be used first, and then extents in the second data file (on /u03) will be used. Just be sure that you don't put the data and index files for the same object on the same disk.

  • If a table will get really big, a better solution is to use partitioning. The table (and indexes) can be created across multiple tablespaces. This will spread out I/O and can be used to physically organize the data based on a logical value (using range partitioning).

  • Consider backup and recovery when dealing with designing the database. Examine the impact on the database for the loss of each disk. For example, if disk A comprises filesystems /u01, /u03, and /u05, what is the impact of losing disk A? If your system is OFA compliant, you lose all the Oracle installation software on /u01. Hopefully you don't have all your control files or all the members of an online redo log group on /u03 and /u05. If you do, you're in trouble. This type of planning is well worth the time and will be very beneficial when developing your backup and recovery plans.
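A sketch of the archive destination check mentioned above (the mount point, threshold, and mail recipient are assumptions):

  # warn when the archive log destination climbs past 90% full
  PCT=`df -k /u10 | tail -1 | awk '{print $5}' | tr -d '%'`
  if [ "$PCT" -gt 90 ]; then
    echo "Archive destination /u10 is ${PCT}% full" | mailx -s "archive space warning" dba
  fi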

Take a look at a sample database layout, shown in Figure 3.4. Assume the system is a hybrid of OLTP and DSS. Also, it contains one data file per tablespace and the disks are mirrored with two controllers. It attempts to be OFA compliant, but some of the naming extensions vary.

This is typical of many Oracle installations, except there are more data and index tablespaces. A few possible I/O contention issues were intentionally introduced to make the design more realistic. Make note of the following:

  • The mount points are separated by controller, using even and odd numbers to identify controller A and controller B. The DBA won't always have this luxury, but the DBA should dictate to the SA how many filesystems are created and the naming conventions when it pertains to Oracle.

  • /u01 has been reserved for the Oracle installation files and the database parameter and configuration files. It is possible to add actual database files here, but it makes administration more difficult.

  • Online redo log groups have been multiplexed. There are three groups with two members each. Each member is on a separate disk on a different controller. They are also named in a logical manner. For example, when group 1 is active, /u03 and /u04 will be written to frequently. When a log switch occurs, group 2 (/u07 and /u08) will be written to by LGWR, and ARCH will read from the first group and write to the Archive Log Dump Destination on /u10. (A sketch of adding a multiplexed group follows this list.)

Figure 3.4 Sample Database Layout

  • /u10 holds the Archive Log Dump Destination. This disk will continually be written to by ARCH, so only a small control file is placed on it. Monitor this location frequently to ensure that it does not run out of space.

  • The control files are multiplexed across four disks and two controllers. Some DBAs would have even more than four copies, but if these disks are mirrored, that should be enough.

  • Two rollback tablespaces are created on separate disks. /u05 holds the rollback segments for OLTP users. Depending on the number of users, there can be even more rollback tablespaces created if needed. /u06 holds the large rollback tablespace for large batch processes and data loads (for the DSS users). Remember, when dealing with batch jobs, that a transaction must be directed to a specific rollback segment. Otherwise, there is no guarantee that the transaction will choose the correct segment.

  • The TEMP tablespace is located on /u03. This is a possible problem because that disk also contains a redo log member. In real life, there are usually not enough disks to dedicate one for each file. If the application is composed mostly of OLTP (few sorts and joins), this will be fine because TEMP will seldom be used. If the application is composed primarily of batch processes and uses the TEMP tablespace frequently, contention will occur.

  • Data and index tablespaces are separated on different disks and controllers. The DBA will hopefully know which tables/indexes are active and which are not so I/O can be balanced. Notice that no data or index files were placed on disks with rollback segments. Avoid competing with rollback segments as much as possible. If necessary, place seldom-used or seldom-updated tables and indexes with rollback segments. Also, note that the disks containing online redo logs are lightly loaded with index and data files. The DBA will often have to place data and index files with contentious online redo logs, but try to put the least used tables/indexes on these disks.

  • Miscellaneous Oracle files such as oratab, tnsnames.ora, and listener.ora are placed in default directories during the Oracle install. The DBA has limited control over the location of these files. It is important to note that they do fall outside the normal filesystems used by the DBA, but they do still need to be configured, monitored, and backed up.
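As a sketch, a multiplexed redo log group like those in Figure 3.4 could be added as follows (the group number, file names, and size are assumptions):

  sqlplus /nolog <<'EOF'
  CONNECT / AS SYSDBA
  ALTER DATABASE ADD LOGFILE GROUP 4
    ('/u03/oradata/prod_1/redo_prd_04a.rdo',
     '/u04/oradata/prod_1/redo_prd_04b.rdo') SIZE 100M;
  EOF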

Sharing Filesystems

The sample database shown in Figure 3.4 addresses only one actual database. Oftentimes, the DBA will need to share the filesystems with more than one database. In this case, the DBA must coordinate the design and creation of any future databases with the needs of the existing database.

This database design is simply an example. In real life, more data and index tablespaces would be created and the diagram would be more cluttered. However, the OFA principles still need to be followed when applicable. Files would be separated based on contention and attention would be paid to the consequences of losing any given disk.
