Exam Prep Questions

  1. As a developer for a large healthcare provider, you are assigned the task of developing a process for updating a patient database. When a patient is transferred from one floor to another, an internal identifier, CurrentRoomID, which is used as the primary key, must be altered while the original key, AdmittanceRoomID, is maintained. If a patient is moved more than once, only the original key and the current key need to be maintained. Several underlying tables have been configured for referential integrity against the patient table and must be updated to match one or the other of the room keys in the patient table. These relationships will be altered based on different situations in other tables. Figure 3.5 illustrates the PatientTracker table design exhibit. What method would you use to accommodate the update?

    Figure 3.5 The PatientTracker table design exhibit.

    A. Use the Cascade Update Related Fields option to have changes in the primary key automatically update the keys in all referenced tables.

    B. Use an indexed view to enable the user to make changes to multiple tables concurrently.

    C. Disable the Enforce Relationship for Inserts and Deletes option to enable an AFTER trigger to handle the necessary changes.

    D. Define an INSTEAD OF UPDATE trigger to perform the necessary updates to all related tables.

    Answer: D. The INSTEAD OF trigger was designed specifically for this type of situation and also to handle complicated updates in which columns are defined as Timestamp, Calculated, or Identity. Cascade operations are inappropriate because the updated key is not always stored. Indexed views by themselves do not allow for the type of alteration desired and would have to be complemented with the actions of a trigger. Disabling referential integrity is a poor solution to any problem, especially considering the medical nature of this application and the possible ramifications. For more information, see the earlier section "Trigger Utilization."
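    As a rough sketch of answer D, an INSTEAD OF UPDATE trigger could apply the key change to the patient table and its related tables in one place. The trigger name and the hypothetical related table (PatientOrders) are assumptions; only PatientTracker and the two room keys come from the question.

    ```sql
    -- Sketch only: inserted is joined to deleted on AdmittanceRoomID,
    -- which never changes, to pair old and new values of the primary key.
    CREATE TRIGGER trPatientMove ON PatientTracker
    INSTEAD OF UPDATE
    AS
    BEGIN
      -- Update a hypothetical related table, matching rows on the old key
      UPDATE o
         SET o.CurrentRoomID = i.CurrentRoomID
        FROM PatientOrders o
        JOIN deleted d  ON o.CurrentRoomID = d.CurrentRoomID
        JOIN inserted i ON i.AdmittanceRoomID = d.AdmittanceRoomID

      -- Apply the new current key, leaving AdmittanceRoomID untouched
      UPDATE p
         SET p.CurrentRoomID = i.CurrentRoomID
        FROM PatientTracker p
        JOIN deleted d  ON p.CurrentRoomID = d.CurrentRoomID
        JOIN inserted i ON i.AdmittanceRoomID = d.AdmittanceRoomID
    END
    ```

    In practice the order of the related-table updates would depend on how the referential integrity constraints are defined; this is the shape of the approach, not a drop-in implementation.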

  3. A large organization needs to maintain image data on a database server. The data is scanned in from documents received from the federal government. Updates to the images are infrequent. When a change occurs, usually the old row of data is archived out of the system and the new document takes its place. Other column information that contains key identifiers about the nature of the document is frequently queried by an OLAP system. Statistical information on how the data was queried is also stored in additional columns. The actual document itself is rarely needed except in processes that print the image. Which of the following represents an appropriate storage configuration?

    A. Place the image data into a filegroup of its own, but on the same volume as the remainder of the data. Place the log onto a volume of its own.

    B. Place all the data onto one volume in a single file. Configure the volume as a RAID parity set and place the log into a volume of its own.

    C. Place the image onto one volume in a file of its own and place the data and log files together on a second volume.

    D. Place the image into a separate filegroup with the log on one volume and the remainder of the data on a second volume.

    Answer: D. Because the image data will seldom be accessed, it makes sense to get the remainder of the data away from the images while moving the log away from the data. This will help to improve performance while providing optimum recoverability in the event of a failure. For more information, see "The File System."
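    A configuration along the lines of answer D could be sketched as follows. The database name, file names, and paths are assumptions; D: and E: stand for two separate volumes, with the images and log on one and the remaining data on the other.

    ```sql
    -- Sketch: image filegroup and log on one volume, data on a second.
    CREATE DATABASE DocumentStore
    ON PRIMARY
      (NAME = DocData,   FILENAME = 'E:\Data\DocData.mdf'),
    FILEGROUP ImageFG
      (NAME = DocImages, FILENAME = 'D:\Images\DocImages.ndf')
    LOG ON
      (NAME = DocLog,    FILENAME = 'D:\Logs\DocLog.ldf')
    GO
    -- A table can then direct its image data into the separate
    -- filegroup via the TEXTIMAGE_ON clause:
    CREATE TABLE Documents
      (DocID      int IDENTITY PRIMARY KEY,
       DocType    char(10) NOT NULL,
       QueryCount int NOT NULL DEFAULT 0,
       DocImage   image NULL)
      ON [PRIMARY] TEXTIMAGE_ON ImageFG
    ```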

  4. An Internet company sells outdoor hardware online to more than 100,000 clients in various areas of the globe. Servicing the website is a SQL Server whose performance is barely adequate to meet the needs of the site. You would like to apply a business rule to the existing system that will limit the outstanding balance of each customer. The outstanding balance is maintained as a denormalized column within the customer table. Orders are collected in a second table containing a trigger that updates the customer balance based on INSERT, UPDATE, and DELETE activity. Up to this point, care has been taken to remove any data from the table if the client balance is too high, so all data should meet the requirements of your new process. How would you apply the new data check?

    A. Modify the existing trigger so that an order that allows the balance to exceed the limit is not permitted.

    B. Create a check constraint with the No Check option enabled on the customer table, so that any inappropriate order is refused.

    C. Create a rule that doesn't permit an order that exceeds the limit and bind the rule to the Orders table.

    D. Create a new trigger on the Orders table that refuses an order that causes the balance to exceed the maximum. Apply the new trigger to only INSERT and UPDATE operations.

    Answer: A. Because a trigger is already in place, it can easily be altered to perform the additional data check. A rule cannot provide the required functionality because it cannot compare data across tables. A CHECK constraint might be viable, but you would then have to alter the existing trigger to check for the constraint error and provide for nested operations. The number of triggers firing should be kept to a minimum; to accommodate additional triggers, you would have to verify the order in which they fire and set server and database properties accordingly. For more information, see "Trigger Utilization."
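    A minimal sketch of answer A follows. The trigger, table, and column names (trOrderBalance, Customers, Balance, BalanceLimit, OrderAmount) are assumptions; the question gives only the general design.

    ```sql
    -- Sketch only: extend the existing balance-maintenance trigger
    -- with the new business-rule check.
    ALTER TRIGGER trOrderBalance ON Orders
    FOR INSERT, UPDATE, DELETE
    AS
    BEGIN
      -- Existing logic: maintain the denormalized balance column
      UPDATE c
         SET c.Balance = c.Balance + i.OrderAmount
        FROM Customers c
        JOIN inserted i ON c.CustomerID = i.CustomerID

      -- New rule: refuse any order that pushes a balance past its limit
      IF EXISTS (SELECT * FROM Customers c
                   JOIN inserted i ON c.CustomerID = i.CustomerID
                  WHERE c.Balance > c.BalanceLimit)
      BEGIN
        RAISERROR ('Order exceeds the customer balance limit.', 16, 1)
        ROLLBACK TRAN
      END
    END
    ```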

  5. An existing sales catalog database structure exists on a system within your company. The company sells inventory from a single warehouse location that is across town from where the computer systems are located. The product table has been created with a non-clustered index based on the product ID, which is also the Primary Key. Non-clustered indexes exist on the product category column and also the storage location column. Most of the reporting done is ordered by product category. How would you change the existing index structure?

    A. Change the definition of the Primary Key so that it is a clustered index.

    B. Create a new clustered index based on the combination of storage location and product category.

    C. Change the definition of the product category so that it is a clustered index.

    D. Change the definition of the storage location so that it is a clustered index.

    Answer: C. Because the majority of the reporting is ordered by product category, that column is the likely candidate for the clustered index. The clustered index represents the physical order of the data and would minimize sorting operations when deriving the output. For more information, see "Maintaining Order with Indexes."
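    Rebuilding the product category index as the clustered index could be sketched like this. The index and column names are assumptions, and a table can have only one clustered index, so the existing nonclustered version must be replaced.

    ```sql
    -- Sketch: DROP_EXISTING rebuilds the index of the same name,
    -- converting it from nonclustered to clustered in one step.
    CREATE CLUSTERED INDEX IX_Product_Category
      ON Product (ProductCategory)
      WITH DROP_EXISTING
    ```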

  6. You are designing an application that will provide data entry clerks the capability of updating the data in several tables. You would like to ease entry and provide common input so the clerks need not enter data into all fields or enter redundant values. What types of technologies could you use to minimize the amount of input needed? Select all that apply.

    A. Foreign key

    B. Cascading update

    C. Identity column

    D. Default

    E. NULL

    F. Primary key

    G. Unique index

    Answer: B, C, D, E. All these options provide or alter data automatically so that values do not have to be supplied as part of the entry operation. In the case of NULL, data need not be provided at all, possibly because the column contains non-critical information. For more information, see "Keys to Success."
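    The selected options might appear together in a table definition like this sketch. All table, column, and constraint names are illustrative assumptions, including the referenced Regions table.

    ```sql
    -- Identity supplies the key value, DEFAULT fills in common values,
    -- and NULL lets non-critical columns be skipped entirely.
    CREATE TABLE DataEntry
      (EntryID   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
       Region    char(2) NOT NULL DEFAULT 'US',
       EnteredOn datetime NOT NULL DEFAULT GETDATE(),
       Notes     varchar(200) NULL)
    GO
    -- A cascading update lets a parent key change propagate rather than
    -- being re-entered in every child row (SQL Server 2000 syntax):
    ALTER TABLE DataEntry
      ADD CONSTRAINT FK_DataEntry_Region
      FOREIGN KEY (Region) REFERENCES Regions (RegionCode)
      ON UPDATE CASCADE
    ```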

  7. A database that you are working on is experiencing reduced performance. The database is used almost exclusively for reporting, with a large number of inserts occurring on a regular basis. Data is cycled out of the system four times a year as part of quarter-ending procedures. It is always important to be able to attain a point-in-time restoration process. You would like to minimize the maintenance needed to accommodate increases and decreases in file storage space. Which option would assist the most in accomplishing the task?

    A. SIMPLE RECOVERY

    B. AUTOSHRINK

    C. MAXSIZE

    D. AUTOGROW

    E. COLLATE

    Answer: D. Use AUTOGROW to set the system so that the files will grow as needed for the addition of new data. You may want to perform a planned shrinkage of the database as part of the quarter-ending process and save on overhead by leaving the AUTOSHRINK option turned off. For more information, see "Creating Files and Filegroups."
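    Answer D might be configured as in this sketch; the database and logical file names are assumptions.

    ```sql
    -- Sketch: let the data file grow automatically as rows are added.
    ALTER DATABASE Reporting
      MODIFY FILE
        (NAME = Reporting_Data,
         FILEGROWTH = 100MB)   -- grow in fixed steps as needed
    GO
    -- A planned shrink can then run once per quarter after the archive,
    -- avoiding AUTOSHRINK overhead the rest of the year:
    DBCC SHRINKDATABASE (Reporting, 10)
    ```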

  8. You are the administrator of a SQL Server 2000 computer. The server contains a database named Inventory. Users report that several storage locations in the UnitsStored field contain negative numbers. You examine the database's table structure. You correct all the negative numbers in the table. You must prevent the database from storing negative numbers. You also want to minimize use of server resources and physical I/O. Which statement should you execute?

    A. ALTER TABLE dbo.StorageLocations
         ADD CONSTRAINT CK_StorageLocations_UnitsStored
         CHECK (UnitsStored >= 0)
    B. CREATE TRIGGER CK_UnitsStored ON StorageLocations
         FOR INSERT, UPDATE AS
         IF EXISTS (SELECT * FROM inserted WHERE UnitsStored < 0)
           ROLLBACK TRAN
    C. CREATE RULE CK_UnitsStored AS @Units >= 0
       GO
       sp_bindrule 'CK_UnitsStored',
         'StorageLocations.UnitsStored'
       GO
    D. CREATE PROC UpdateUnitsStored
         (@StorageLocationID int, @UnitsStored bigint) AS
         IF @UnitsStored < 0
           RAISERROR (50099, 17, 1)
         ELSE
           UPDATE StorageLocations
             SET UnitsStored = @UnitsStored
             WHERE StorageLocationID = @StorageLocationID

    Answer: A. A CHECK constraint is the best method of preventing negative entries and uses the least server resources. A trigger carries too much overhead, the RULE is a backward-compatibility feature and is not accurately implemented here, and a stored procedure could handle the process but is normally reserved for processes requiring more complex logic. For more information, see "What's on the Table."

  9. You are the administrator of a SQL Server 2000 computer. The server contains a database named Inventory. In this database, the Parts table has a primary key that is used to identify each part stored in the company's warehouse. Each part has a unique UPC code that your company's accounting department uses to identify it. You want to maintain the referential integrity between the Parts table and the OrderDetails table. You want to minimize the amount of physical I/O used within the database. Which two T-SQL statements should you execute? (Each correct answer represents part of the solution; choose two.)

    A. CREATE UNIQUE INDEX IX_UPC ON Parts(UPC)
    B. CREATE UNIQUE INDEX IX_UPC ON OrderDetails(UPC)
    C. CREATE TRIGGER UPCRI ON OrderDetails
         FOR INSERT, UPDATE AS
         IF NOT EXISTS (SELECT UPC FROM Parts
             WHERE Parts.UPC = inserted.UPC)
           BEGIN
             ROLLBACK TRAN
           END
    D. CREATE TRIGGER UPCRI ON Parts
         FOR INSERT, UPDATE AS
         IF NOT EXISTS (SELECT UPC FROM OrderDetails
             WHERE OrderDetails.UPC = inserted.UPC)
           BEGIN
             ROLLBACK TRAN
           END
    E. ALTER TABLE dbo.OrderDetails
         ADD CONSTRAINT FK_OrderDetails_Parts
         FOREIGN KEY (UPC) REFERENCES dbo.Parts(UPC)
    F. ALTER TABLE dbo.Parts
         ADD CONSTRAINT FK_Parts_OrderDetails
         FOREIGN KEY (UPC) REFERENCES dbo.Parts(UPC)

    Answer: A, E. The UNIQUE index on the Parts table's UPC column is required first so that the FOREIGN KEY constraint can be applied from the OrderDetails.UPC column referencing Parts.UPC. This satisfies the referential integrity requirement and also reduces the I/O needed during joins between Parts and OrderDetails that use the FOREIGN KEY constraint. For more information, see "FOREIGN KEY Constraint."

  10. You are the database developer for a leasing company. Your database includes a table that is defined as shown here:

    CREATE TABLE Lease
      (Id Int IDENTITY NOT NULL
         CONSTRAINT pk_lesse_id PRIMARY KEY NONCLUSTERED,
       Lastname varchar(50) NOT NULL,
       FirstName varchar(50) NOT NULL,
       SSNo char(9) NOT NULL,
       Rating char(10) NULL,
       Limit money NULL)

    Each SSNo must be unique. You want the data to be physically stored in SSNo sequence. Which constraint should you add to the SSNo column on the Lease table?

    A. UNIQUE CLUSTERED constraint

    B. UNIQUE NONCLUSTERED constraint

    C. PRIMARY KEY CLUSTERED constraint

    D. PRIMARY KEY NONCLUSTERED constraint

    Answer: A. To control the physical storage sequence of the data, you must use a clustered constraint or index. A PRIMARY KEY constraint would also enforce uniqueness, but the table already has its primary key defined on the Id column, so a UNIQUE CLUSTERED constraint is the appropriate choice for SSNo.
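    Applied to the Lease table from the question, answer A would look like this; the constraint name is an assumption.

    ```sql
    -- UNIQUE enforces one row per SSNo, and CLUSTERED stores the
    -- rows physically in SSNo sequence.
    ALTER TABLE Lease
      ADD CONSTRAINT UQ_Lease_SSNo UNIQUE CLUSTERED (SSNo)
    ```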

  12. You are building a database and you want to eliminate duplicate entry and minimize data storage wherever possible. You want to track the following information for employees and managers: first name, middle name, last name, employee identification number, address, date of hire, department, salary, and name of manager. Which table design should you use?

    A. Table1: EmpID, MgrID, Firstname, Middlename, Lastname, Address, Hiredate, Dept, Salary
       Table2: MgrID, Firstname, Middlename, Lastname
    B. Table1: EmpID, Firstname, Middlename, Lastname, Address, Hiredate, Dept, Salary
       Table2: MgrID, Firstname, Middlename, Lastname
       Table3: EmpID, MgrID
    C. Table1: EmpID, MgrID, Firstname, Middlename, Lastname, Address, Hiredate, Dept, Salary
    D. Table1: EmpID, Firstname, Middlename, Lastname, Address, Hiredate, Dept, Salary
       Table2: EmpID, MgrID
       Table3: MgrID

    Answer: C. A single table can provide all the necessary information with no redundancy. Because a manager's details are simply another employee row, the table can be queried with a self-join operation to provide the desired reporting. Join operations are discussed in detail in Chapter 5.
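    With the single-table design of answer C, the manager's name can be recovered with a self-join such as this sketch:

    ```sql
    -- Each manager is just another employee row, so Table1 joins to
    -- itself through the MgrID column; LEFT JOIN keeps employees
    -- who have no manager recorded.
    SELECT e.Lastname AS Employee,
           m.Lastname AS Manager
      FROM Table1 e
      LEFT JOIN Table1 m ON e.MgrID = m.EmpID
    ```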

  13. You are developing an application and need to create an inventory table on each of the databases located in New York, Detroit, Paris, London, Los Angeles, and Hong Kong. To accommodate a distributed environment, you must ensure that each row entered into the inventory table is unique across all locations. How can you create the inventory table?

    A. Supply Identity columns using a different sequential starting value for each location and use an increment of 6.

    B. Use the IDENTITY function. At the first location use IDENTITY(1,1), at the second location use IDENTITY(100000,1), and so on.

    C. Use a UNIQUEIDENTIFIER as the key at each location.

    D. Use the Timestamp column as the key at each location.

    Answer: A. Using identities in this fashion enables records to be entered that have no overlap. One location would use entry values 1, 7, 13, 19; the next would have 2, 8, 14, 20; the third, 3, 9, 15, 21; and so on. For more information, see "Data Element Definition."
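    The interleaved ranges in answer A come from giving each site the same increment but a different seed, as in this sketch (column names are assumptions):

    ```sql
    -- New York (seed 1): generates 1, 7, 13, 19, ...
    CREATE TABLE Inventory
      (InventoryID int IDENTITY(1, 6) NOT NULL PRIMARY KEY,
       ItemName    varchar(50) NOT NULL)

    -- Detroit (seed 2) would use IDENTITY(2, 6) for 2, 8, 14, 20, ...
    -- and so on through the sixth location with IDENTITY(6, 6).
    ```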

  14. You are building a new database for a company with 10 departments. Each department contains multiple employees. In addition, each employee might work for several departments. How should you logically model the relationship between the department entity and the employee entity?

    A. Create a mandatory one-to-many relationship between department and employee.

    B. Create an optional one-to-many relationship between department and employee.

    C. Create a new entity, create a one-to-many relationship from the employee entity to the new entity, and create a one-to-many relationship from the department entity to the new entity.

    D. Create a new entity, create a one-to-many relationship from the new entity to the employee entity, and then create a one-to-many relationship from the new entity to the department entity.

    Answer: C. This is a many-to-many relationship scenario, which in SQL Server is implemented using three tables. The center table, often referred to as the connecting or joining table, is on the many side of both of the relationships to the other base table. For more information, see "FOREIGN KEY Constraint."
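    The three-table shape described in answer C can be sketched as follows; the table and column names are assumptions.

    ```sql
    CREATE TABLE Department
      (DeptID   int NOT NULL PRIMARY KEY,
       DeptName varchar(50) NOT NULL)

    CREATE TABLE Employee
      (EmpID    int NOT NULL PRIMARY KEY,
       LastName varchar(50) NOT NULL)

    -- The joining table sits on the "many" side of both relationships,
    -- so one employee can belong to several departments and vice versa.
    CREATE TABLE EmployeeDepartment
      (EmpID  int NOT NULL REFERENCES Employee (EmpID),
       DeptID int NOT NULL REFERENCES Department (DeptID),
       PRIMARY KEY (EmpID, DeptID))
    ```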
