Perhaps the most common metadata storage mechanism is the centralized, custom-developed metadata database. This option seems the easiest to implement; most beneficiaries appear to get what they want immediately. Populating this storage solution is also quite simple; many implementers manually rekey metadata that already exists throughout the organization. There are many variations on the storage scenario, including not storing the metadata directly in the repository solution. The pros and cons of each option vary with the timing and planned duration of the metadata solution.
Once the metamodels are defined and the metadata instances have been officially sourced, it is important to decide how the metadata instances will become part of the metadata solution. Simply speaking, this means deciding how and where to store the metadata. There are various options, none necessarily better or worse than the others; the right choice depends on many architectural issues, all of which need to be revisited. In addition, the storage option could address not only Question 1 (What metadata do I have?), but also Questions 3 (Where is it?) and 5 (How do I get it?).
The metamodel and its associated storage capabilities are related directly to the type of storage option(s) selected. Specifically, choices include one or any combination of the following:
Centralized custom database designed to reflect an integrated, all-encompassing metamodel perspective.
Metadata storage at the source with a main database functioning as the metadata directory or gateway by interpreting each metamodel's addressing and location-specific information.
Distributed metadata storage with separate metamodels and associated metadata instances residing in distinct locations. A master metamodel or search engine would be available to track and locate specific metadata instances (similar to the enterprise portal, which is discussed in Chapter 15).
Centralized repository tool with vendor-supplied metamodels populated either manually or via vendor-supplied interfaces and APIs.
Distributed repository tools, also with vendor-supplied metamodels, but populated in a distributed scenario such that coexistence is planned and integrated.
Let's discuss each option from the perspective of advantages and disadvantages.
Centralized Custom Database
The easiest option to set up, this solution directly mirrors the results of the metadata requirements analysis. All metamodels are combined, typically at junction points, and each metamodel component becomes a table in a relational database implementation. Instances are loaded via a one-time bulk load, periodic updates, or manual entry, depending primarily on volume and update frequency. Front ends can be simple client (e.g., VB) access, or, in many cases, intranet search engines placed "on top of" the custom database to allow subject-based queries and access.
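In a relational implementation, each metamodel component becomes a table and the junction points become foreign-key relationships. A minimal sketch of this idea follows, using two hypothetical metamodel components (a data element and a data store) joined at a junction table; all table and column names are illustrative, not taken from the text:

```python
import sqlite3

# In-memory database standing in for the centralized custom metadata database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two metamodel components, each implemented as a relational table,
# plus the junction point where the metamodels were combined.
cur.executescript("""
CREATE TABLE data_element (
    element_id   INTEGER PRIMARY KEY,
    element_name TEXT NOT NULL,
    definition   TEXT
);
CREATE TABLE data_store (
    store_id   INTEGER PRIMARY KEY,
    store_name TEXT NOT NULL
);
CREATE TABLE element_store (
    element_id INTEGER REFERENCES data_element(element_id),
    store_id   INTEGER REFERENCES data_store(store_id),
    PRIMARY KEY (element_id, store_id)
);
""")

# One-time bulk load of metadata instances.
cur.executemany("INSERT INTO data_element VALUES (?, ?, ?)",
                [(1, "customer_id", "Unique customer identifier"),
                 (2, "order_date", "Date the order was placed")])
cur.executemany("INSERT INTO data_store VALUES (?, ?)",
                [(10, "Sales warehouse")])
cur.executemany("INSERT INTO element_store VALUES (?, ?)",
                [(1, 10), (2, 10)])
conn.commit()

# A subject-based query such as an intranet front end might issue:
# which elements live in the Sales warehouse, and what do they mean?
rows = cur.execute("""
    SELECT e.element_name, e.definition
    FROM data_element e
    JOIN element_store es ON es.element_id = e.element_id
    JOIN data_store s     ON s.store_id = es.store_id
    WHERE s.store_name = 'Sales warehouse'
    ORDER BY e.element_name
""").fetchall()
```

The junction table is the direct analogue of the combined metamodels: a query that crosses it answers questions that span two formerly separate metamodel perspectives.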
In the best of worlds, this custom database is monitored and quality is controlled by an individual assigned to metadata administration. His or her responsibilities include the validation of metadata instances as well as the guarantee of the database's availability. Database design and enhancement also fall within this individual's job description, and the position remains filled even after this individual moves on. The metadata database remains an active part of its beneficiaries' metadata analyses. Figure 11-1 illustrates this scenario.
Figure 11-1 A centralized custom metadata database
In the worst of worlds, this database is soon out of date. In most cases it is established to meet a narrowly scoped initial objective (e.g., one data warehouse, definitions of one OTS package's data) and is typically requested by one specific user community. Despite the fact that the metadata probably exists elsewhere, it is recreated yet again so that the beneficiaries have easy access based on its new, integrated single location. In a short time, typically one year, the metadata database is no longer in demand because of its inaccuracy, and metadata beneficiaries seldom use it. The number of active users dwindles, and those responsible for its initial design and creation move on to newer endeavors.
In the most likely scenario, the metadata database's content and scope are too restrictive. Designed to meet a specific metadata beneficiary's set of requirements, the database is usually not flexible enough to expand beyond that initial focus. Because the metamodel is only one part of the metadata solution, a direct implementation without the other architectural features is incomplete. As metadata requirements and/or the number and types of metadata beneficiaries expand, the initial implementation loses its leverage, and a more flexible, better planned metadata solution typically replaces this custom database within two years. The result is yet another node on the corporate metadata web.
Metadata Storage at the Source
In response to the trend toward reducing unnecessary redundancy in both the data and the metadata worlds, most metadata solutions are adopting the philosophy of leaving metadata where it is used. In many cases this metadata is created, updated, and maintained within a specific development or reporting tool or, in some cases, as part of a custom or purchased application package. Here, the metamodels and the metadata requirements analysis consider the location of the metadata of record, as previously discussed. In other words, whether an official value exists to correct conflicting metadata instances is usually the key to whether metadata can remain solely where it is.2 Where metadata instances conflict in value but not in intent or meaning, the metadata solution design must assign metadata of record and tie active maintenance plans to the overall architecture. Without such a strategy, the various metadata values and conflicts will eventually fracture the overall architecture. Metadata maintenance strategies must coexist with the overall architectural plans.3
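The metadata-of-record rule above can be sketched as a small resolver: when instance values conflict, the value from the designated source of record wins, and a conflict with no assigned record is exactly the failure the text warns about. The element name, source names, and record assignment below are illustrative assumptions:

```python
# Hypothetical registry: for each metadata element, which source is the
# designated metadata of record when instance values conflict.
METADATA_OF_RECORD = {
    "customer_id.definition": "modeling_tool",
}

def resolve(element, instances):
    """Pick one value from possibly conflicting metadata instances.

    `instances` maps source name -> instance value. If all values agree,
    any of them will do; if they conflict, the assigned metadata of
    record decides. A conflict with no assigned record is an error --
    the situation the text says will fracture the architecture.
    """
    values = set(instances.values())
    if len(values) == 1:
        return values.pop()
    record_source = METADATA_OF_RECORD.get(element)
    if record_source is None or record_source not in instances:
        raise ValueError(f"conflicting values for {element} "
                         "and no metadata of record assigned")
    return instances[record_source]

# Two tools disagree on a definition; the modeling tool is the record.
official = resolve("customer_id.definition", {
    "modeling_tool": "Unique customer identifier",
    "etl_tool": "Customer number",
})
```

The point of the sketch is the error branch: without the record assignment, there is no principled way to answer the query, which is why the maintenance strategy must be part of the architecture rather than an afterthought.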
By keeping metadata in diverse locations, as illustrated in Figure 11-2, the metadata solution database becomes a metadata directory or gateway. Instead of tracking metadata instances, the database contains the metamodel, depicting metadata interrelationships as well as location specifics (answers to Questions 1, 2, and 5) for unique and specific metamodels. As anticipated, the specifics of the addressing schemes depend on the deployed technology and architecture of each metadata source. In addition, the ability of the deployed metadata database technology to interface with each of the metadata sources puts a major emphasis on the feasibility of the solution.
Figure 11-2 Metadata storage at the source
From a benefit perspective, this type of implementation obviously targets the reduction and eventual elimination of metadata conflict. As metadata instances are requested, typically they are retrieved from their actual source, with the metadata repository functioning as a gateway. As metadata is updated, the latest instances are accessed. There is no need for synchronization among separate metadata stores, because they are all connected via the gateway's common metamodel.
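A directory-style implementation stores addressing information rather than instances: the gateway holds the common metamodel plus location specifics, and each request is routed to the source, so the latest value is always returned. In the sketch below the two sources, their contents, and the fetch functions are all hypothetical stand-ins for tool-specific interfaces:

```python
# Stand-ins for two metadata sources -- a modeling tool's store and a
# reporting tool's catalog. In practice each would be reached through
# its own API or exchange mechanism.
modeling_tool_store = {"customer_id": "Unique customer identifier"}
report_catalog = {"customer_id": "Shown as 'Cust #' on sales reports"}

# The gateway holds only the metamodel's addressing information:
# (element, kind) -> (source name, fetch function). No instances are copied.
DIRECTORY = {
    ("customer_id", "definition"): ("modeling_tool", modeling_tool_store.get),
    ("customer_id", "report_usage"): ("report_catalog", report_catalog.get),
}

def get_metadata(element, kind):
    """Route a metadata request through the gateway to its source.

    Because instances stay at the source, the latest value is always
    returned and no store-to-store synchronization is needed.
    """
    source, fetch = DIRECTORY[(element, kind)]
    return source, fetch(element)

source, value = get_metadata("customer_id", "definition")
```

If the modeling tool updates its definition, the next `get_metadata` call sees the new value immediately; that is the synchronization-free property the paragraph above describes.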
On the downside, technology certainly plays a major role in the feasibility of this solution. Metadata standards are moving toward universal metamodels with associated access routines, and patience will pay off in this scenario. Until standards become universal and easy to plug in, interface capabilities depend on the compatibility of the underlying metadata stores as well as the completeness of API (application programming interface) sets or the maturity of standard intertool exchange mechanisms such as XML.4
Distributed Metadata Storage
In the distributed scenario illustrated in Figure 11-3, metamodels and their metadata instances reside at distinct locations. There is no master metamodel or repository, as there is with the previous option, because the search engines have the ability to scan the contents of each metadata store directly, typically by accessing individual metamodels in order to retrieve the metadata of choice. There is no need to organize or integrate the various metadata stores, but the practicality of this solution depends substantially on the deployed search engine and the existence of the particular engine's required contents in each distinct metadata area.
Figure 11-3 Distributed metadata storage
Distributed metadata storage is becoming popularly known as the enterprise portal. Although most portals are used to retrieve data, the same concepts apply to metadata retrieval. In this implementation, the search targets remain unchanged, except for some standard portal-aware identification. The search engines become the power behind the practicality of this solution. Despite this apparent ease of implementation, the efficiency of such a setup depends substantially on how well organized each metadata store is in relation to the others. For example, having the same information in more than one place, without forethought as to a logical separation concept, guarantees only that the same information, with perhaps different intentions, is retrieved over and over again.
Properly implemented portals require both architectural and metadata instance planning. Without such advance design, the portal can end up returning lots of unrelated metadata, leaving the user stuck making sense of it all.
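The portal-style search can be sketched as a small federated scan: the engine understands each store's metamodel through a per-store adapter, and without a logical separation plan the "same" element comes back from every store with a possibly different intent. Store names, layouts, and contents below are illustrative:

```python
# Three independent metadata stores, each with its own metamodel layout.
warehouse_meta = [{"name": "customer_id", "definition": "Warehouse key"}]
erp_meta = [{"element": "customer_id", "meaning": "ERP customer number"}]
crm_meta = [{"field": "customer_id", "desc": "CRM contact identifier"}]

# Per-store adapters: the search engine must interpret each metamodel
# in order to scan that store's contents.
STORES = {
    "warehouse": (warehouse_meta, lambda r: (r["name"], r["definition"])),
    "erp": (erp_meta, lambda r: (r["element"], r["meaning"])),
    "crm": (crm_meta, lambda r: (r["field"], r["desc"])),
}

def portal_search(term):
    """Scan every store for elements whose name matches the term."""
    hits = []
    for store_name, (records, extract) in STORES.items():
        for record in records:
            name, text = extract(record)
            if term in name:
                hits.append((store_name, name, text))
    return hits

# One term, three hits for the "same" element -- each with a different
# intent, because no logical separation concept was planned.
hits = portal_search("customer_id")
```

The triplicate result set is the failure mode the paragraph above describes: the engine works, but the user is left to decide which of the three "customer_id" answers was actually wanted.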
Centralized Repository Tool
Vendor-supplied repository tools offer much more than custom metadata databases. Their architecture assumes interfaces, and in many cases full sets of APIs are included as part of the base repository offering. Although their full design and functionality are covered in a later chapter,5 it is necessary to introduce them here as an option for metadata storage. Most initial metadata solutions during the 1990s involved repository technology. The amount of functionality included in the standard tools varied substantially by vendor, and the latter part of the 1990s saw many vendor acquisitions and mergers, so that today's offerings represent distinct architectural variations.
With a centralized repository tool, most installations use the repository's metadata store as the sole integrated metadata area. Initially, other sources of metadata are loaded into the repository, usually through a vendor-supplied batch interface. The means of metadata maintenance varies from installation to installation, but in general, a repository administrator oversees the integrity of the repository's contents. Likewise, some aspect of metadata creation usually involves automated repository update.
Purchased repository tools are often called a "repository in a box" because the vendor provides metamodels along with standard interfaces to and from the populated repository. The supplied functionality comes with a price tag, however, and therefore purchased repository tools are not usually considered for small-scope metadata solutions. Finally, as discussed in Chapter 14, each repository product is typically focused on a specific type of metadata functionality (e.g., data management, application development, or data warehouse support) and therefore is architecturally designed to interface only with products that are targeted to support the same functional market space.
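A vendor batch interface of the kind described above typically validates source records against the vendor-supplied metamodel before loading them. The sketch below shows that load-and-validate step with an entirely hypothetical vendor metamodel; no real product's API is implied:

```python
# Hypothetical vendor-supplied metamodel: the attributes required for
# a "DataElement" object type in the repository.
VENDOR_METAMODEL = {"DataElement": {"name", "definition", "steward"}}

repository = []  # stand-in for the repository's metadata store

def batch_load(object_type, records):
    """Load source metadata into the repository, rejecting records that
    do not satisfy the vendor metamodel -- the kind of integrity check
    a repository administrator would otherwise perform by hand."""
    required = VENDOR_METAMODEL[object_type]
    loaded, rejected = 0, []
    for record in records:
        if required <= record.keys():  # all required attributes present
            repository.append((object_type, record))
            loaded += 1
        else:
            rejected.append(record)
    return loaded, rejected

loaded, rejected = batch_load("DataElement", [
    {"name": "customer_id", "definition": "Unique id", "steward": "Sales"},
    {"name": "order_date"},  # missing required attributes; rejected
])
```

The rejected list is what the repository administrator reviews; because the metamodel comes from the vendor, the validation rule is part of the "repository in a box" rather than something the installation has to design.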
Distributed Repository Tools
When vendor-supplied repository tools are designed to coexist, metadata storage takes on yet another form. In this scenario, the tools are deployed throughout an organization, typically with each metadata repository representing a subset of the overall enterprise's metadata. The synchronization of these tool instances, as well as their participation in the full metadata indexing schema, is quite vendor dependent. Again, as with distributed metadata storage, an overall metadata distribution plan is a prerequisite for success in this scenario.
Distributed repository tool implementations are not the same as distributed database implementations. Repository software provides a key part of the metadata-based functionality and must also be functionally distributed.
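One way to read the "subset of the overall enterprise's metadata" point is as a partitioning scheme: each repository instance owns a functional slice, and a thin index (the full metadata indexing schema) routes each request to the owning instance. The partitioning and contents below are illustrative:

```python
# Each distributed repository instance owns a functional subset of the
# enterprise metadata (the partitioning scheme here is illustrative).
repositories = {
    "data_mgmt": {"customer_id": "Unique customer identifier"},
    "warehouse": {"load_job_42": "Nightly sales load"},
}

# The full metadata indexing schema: element -> owning repository.
INDEX = {element: repo
         for repo, contents in repositories.items()
         for element in contents}

def lookup(element):
    """Route a request to the repository instance that owns it."""
    repo = INDEX[element]
    return repo, repositories[repo][element]

repo, value = lookup("load_job_42")
```

The index is the piece the text calls vendor dependent: whether it is maintained automatically as each repository changes, or rebuilt by hand, is what the distribution plan has to settle in advance.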