- Why Transactions?
- Application Structure
- Opening the Environment
- Opening the Databases
- Recoverability and Deadlock Avoidance
- Repeatable Reads
- Transactional Cursors
- Nested Transactions
- Environment Infrastructure
- Deadlock Detection
- Performing Checkpoints
- Database and Log File Archival Procedures
- Log File Removal
- Recovery Procedures
- Recovery and Filesystem Operations
- Berkeley DB Recoverability
- Transaction Throughput
The fifth component of the infrastructure, recovery procedures, concerns the recoverability of the database. After any application or system failure, there are two possible approaches to database recovery:
There is no need for recoverability, and all databases can be re-created from scratch. Although these applications may still need transaction protection for other reasons, recovery usually consists of removing the Berkeley DB environment home directory and all files it contains, and then restarting the application.
It is necessary to recover information after system or application failure. In this case, recovery processing must be performed on any database environments that were active at the time of the failure. Recovery processing involves running the db_recover utility or calling the DBENV->open function with the DB_RECOVER or DB_RECOVER_FATAL flags. During recovery processing, all database changes made by aborted or unfinished transactions are undone, and all database changes made by committed transactions are redone, as necessary. Database applications must not be restarted until recovery completes. After recovery finishes, the environment is properly initialized so that applications may be restarted.
If you intend to do recovery, there are two possible types of recovery processing:
Catastrophic recovery. A failure that requires catastrophic recovery is one in which either the database or log files are destroyed or corrupted. For example, catastrophic failure includes the case where the disk drive on which either the database or log files are stored has been physically destroyed, or where the system's normal filesystem recovery on startup cannot bring the database and log files to a consistent state. Catastrophic failure is often difficult to detect, and perhaps the most common sign that catastrophic recovery is needed is that the normal recovery procedures fail.
To restore your database environment after catastrophic failure, take the following steps:
Restore the most recent snapshots of the database and log files from the backup media into the system directory where recovery will be performed.
If any log files were archived since the last snapshot was made, they should be restored into the Berkeley DB environment directory where recovery will be performed. Make sure that you restore them in the order in which they were written. The order is important because it's possible that the same log file appears on multiple backups, and you want to run recovery using the most recent version of each log file.
Run the db_recover utility, specifying its -c option; or call the DBENV->open function, specifying the DB_RECOVER_FATAL flag. The catastrophic recovery process will review the logs and database files to bring the environment databases to a consistent state as of the time of the last uncorrupted log file that is found. It is important to realize that only transactions committed before that time will appear in the databases. It is possible to re-create the databases in a location different from the original by specifying appropriate pathnames to the -h option of the db_recover utility. For this to work properly, your application must reference files by names relative to the database home directory or to the pathname(s) specified in calls to DBENV->set_data_dir, instead of using absolute pathnames.
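The steps above can also be performed programmatically. The following is a minimal sketch of catastrophic recovery using the C API, assuming the snapshot and archived log files have already been restored into the environment home directory (the `/var/dbenv` path and the subsystem flags shown are illustrative; the flags must match those used when the environment was originally created):

```c
#include <stdio.h>
#include <db.h>

/*
 * Sketch: run catastrophic recovery on an environment home directory.
 * Roughly equivalent to: db_recover -c -h /var/dbenv
 * (the home path is hypothetical; substitute your own).
 */
int
run_catastrophic_recovery(const char *home)
{
	DB_ENV *dbenv;
	int ret;

	if ((ret = db_env_create(&dbenv, 0)) != 0) {
		fprintf(stderr, "db_env_create: %s\n", db_strerror(ret));
		return (ret);
	}

	/*
	 * DB_RECOVER_FATAL performs catastrophic recovery, replaying all
	 * available log files rather than only those since the last
	 * checkpoint.
	 */
	ret = dbenv->open(dbenv, home,
	    DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG |
	    DB_INIT_MPOOL | DB_INIT_TXN | DB_RECOVER_FATAL, 0);
	if (ret != 0)
		fprintf(stderr, "DB_ENV->open: %s\n", db_strerror(ret));

	(void)dbenv->close(dbenv, 0);
	return (ret);
}
```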
Non-catastrophic or normal recovery. If the failure is non-catastrophic and the database files and log are both accessible on a stable filesystem, run the db_recover utility without the -c option or call the DBENV->open function specifying the DB_RECOVER flag. The normal recovery process will review the logs and database files to ensure that all changes associated with committed transactions appear in the databases, and that all uncommitted transactions do not appear.
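A sketch of normal recovery at application startup, assuming a single process opens the environment (the subsystem flags shown are illustrative and must match the environment's configuration). Because the DB_RECOVER flag is essentially a no-op when no recovery is needed, applications structured this way can specify it unconditionally at startup:

```c
#include <stdio.h>
#include <db.h>

/*
 * Sketch: open an environment, performing normal recovery if necessary.
 * Roughly equivalent to running the db_recover utility without -c.
 */
int
open_env_with_recovery(DB_ENV *dbenv, const char *home)
{
	int ret;

	ret = dbenv->open(dbenv, home,
	    DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG |
	    DB_INIT_MPOOL | DB_INIT_TXN | DB_RECOVER, 0);
	if (ret != 0)
		fprintf(stderr, "DB_ENV->open: %s\n", db_strerror(ret));
	return (ret);
}
```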