Planning Your Data Model

Before you begin implementing a MongoDB database, you need to understand the nature of the data being stored, how that data will be stored, and how it will be accessed. Understanding these concepts helps you make determinations ahead of time and structure the data and your application for optimal performance.

Specifically, you should ask yourself the following questions:

  • What basic objects will my application be using?
  • What is the relationship between the different object types—one-to-one, one-to-many, or many-to-many?
  • How often will new objects be added to the database?
  • How often will objects be deleted from the database?
  • How often will objects be changed?
  • How often will objects be accessed?
  • How will objects be accessed—by ID, property values, comparisons, or other?
  • How will groups of object types be accessed—common ID, common property value, or other?

When you have the answers to these questions, you are ready to consider the structure of collections and documents inside MongoDB. The following sections discuss different methods of document, collection, and database modeling you can use in MongoDB to optimize data storage and access.

Normalizing Data with Document References

Data normalization is the process of organizing documents and collections to minimize redundancy and dependency. This is done by identifying object properties that are subobjects and that should be stored as a separate document in another collection from the object’s document. Typically, this is useful for objects that have a one-to-many or many-to-many relationship with subobjects.

The advantage of normalizing data is that the database size will be smaller because only a single copy of each object will exist in its own collection instead of being duplicated on multiple objects in a single collection. Additionally, if you modify the information in the subobject frequently, then you need to modify only a single instance instead of every record in the object’s collection that has that subobject.

A major disadvantage of normalizing data is that, when looking up objects that require the normalized subobject, a separate lookup must occur to link the subobject. This can result in a significant performance hit if you access that data frequently.

An example of when normalizing data makes sense is a system that contains users who have a favorite store. Each User is an object with name, phone, and favoriteStore properties. The favoriteStore property is also a subobject that contains name, street, city, and zip properties.

However, thousands of users might have the same favorite store, so you see a high one-to-many relationship. Therefore, storing the FavoriteStore object data in each User object doesn’t make sense because it would result in thousands of duplications. Instead, each FavoriteStore document should include an _id property whose value can be referenced from documents in the Users collection. The application can then use the favoriteStore reference ID to link data from the Users collection to FavoriteStore documents in the FavoriteStores collection.

Figure 1.1 illustrates the structure of the Users and FavoriteStores collections just described.


FIGURE 1.1 Defining normalized MongoDB documents by adding a reference to documents in another collection.
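The normalized layout in Figure 1.1 can be sketched in plain Python, with dictionaries standing in for collections (no MongoDB connection involved; the store data and the find_user_with_store helper are invented for illustration). Note the second lookup required to resolve the reference:

```python
# In-memory stand-ins for the FavoriteStores and Users collections.
favorite_stores = {
    "store1": {"_id": "store1", "name": "Corner Market",
               "street": "123 Main St", "city": "Springfield", "zip": "12345"},
}

users = [
    {"name": "Ann", "phone": "555-0100", "favoriteStore": "store1"},
    {"name": "Bob", "phone": "555-0101", "favoriteStore": "store1"},
]

def find_user_with_store(name):
    """Look up a user, then do the separate lookup to link the store."""
    user = next(u for u in users if u["name"] == name)
    resolved = dict(user)
    # Second lookup: resolve the reference ID to the actual store document.
    resolved["favoriteStore"] = favorite_stores[user["favoriteStore"]]
    return resolved

print(find_user_with_store("Ann")["favoriteStore"]["name"])  # Corner Market
```

Both users reference the single store1 document, so the store’s address exists exactly once no matter how many users favor it.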

Denormalizing Data with Embedded Documents

Denormalizing data is the process of identifying subobjects of a main object that should be embedded directly into the document of the main object. Typically, this is done on objects that have mostly one-to-one relationships or that are relatively small and do not get updated frequently.

The major advantage of denormalized documents is that you can get the full object back in a single lookup without needing to do additional lookups to combine subobjects from other collections. This is a major performance enhancement. The downside is that, for subobjects with a one-to-many relationship, you are storing a separate copy in each document; this slows insertion a bit and takes up additional disk space.

An example of when denormalizing data makes sense is a system that contains users’ home and work contact information. The user is an object represented by a User document with name, home, and work properties. The home and work properties are subobjects that contain phone, street, city, and zip properties.

The home and work properties do not change often for the user. Multiple users might reside in the same home, but this likely will be a small number. In addition, the actual values inside the subobjects are not that big and will not change often. Therefore, storing the home contact information directly in the User object makes sense.

The work property takes a bit more thinking. How many people are you really going to get who have the same work contact information? If the answer is not many, the work object should be embedded with the User object. How often are you querying the User and need the work contact information? If you will do so rarely, you might want to normalize work into its own collection. However, if you will do so frequently or always, you will likely want to embed work with the User object.

Figure 1.2 illustrates the structure of Users with the home and work contact information embedded, as described previously.


FIGURE 1.2 Defining denormalized MongoDB documents by implementing embedded objects inside a document.
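The embedded structure in Figure 1.2 can be sketched the same way (the field values here are invented). A single lookup returns the complete object, with no second collection to consult:

```python
# A denormalized User document: home and work contact details are embedded
# directly, so one read returns everything.
user = {
    "name": "Ann",
    "home": {"phone": "555-0100", "street": "1 Elm St",
             "city": "Springfield", "zip": "12345"},
    "work": {"phone": "555-0200", "street": "9 Oak Ave",
             "city": "Springfield", "zip": "12345"},
}

# One access, full object -- no join or extra lookup required.
print(user["work"]["phone"])  # 555-0200
```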

Using Capped Collections

A great feature of MongoDB is the capability to create a capped collection. A capped collection is a collection that has a fixed size. When a new document needs to be written to a capped collection that is already full, the oldest document in the collection is deleted and the new document is inserted. Capped collections work great for objects that have a high rate of insertion, retrieval, and deletion.

The following list highlights the benefits of using capped collections:

  • Capped collections guarantee that the insert order is preserved. Queries do not need to use an index to return documents in the order they were stored, eliminating the indexing overhead.
  • Capped collections guarantee that the insertion order is identical to the order on disk by prohibiting updates that increase the document size. This eliminates the overhead of relocating and managing the new location of documents.
  • Capped collections automatically remove the oldest documents in the collection. Therefore, you do not need to implement deletion in your application code.

Capped collections do impose the following restrictions:

  • You cannot update documents to a larger size after they have been inserted into the capped collection. You can update them, but the data must be the same size or smaller.
  • You cannot delete documents from a capped collection. The data will take up space on disk even if it is not being used. You can explicitly drop the capped collection, which effectively deletes all entries, but you also need to re-create it to use it again.

A great use of capped collections is as a rolling log of transactions in your system. You can always access the last X number of log entries without needing to explicitly clean up the oldest.
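The rolling-log behavior can be sketched with a fixed-length deque (a toy model only: a real capped collection is sized in bytes, not document count, and is created with a command along the lines of db.createCollection("log", {capped: true, size: 100000}) in the mongo shell):

```python
from collections import deque

# Capped-collection semantics in miniature: once the cap is reached, each
# insert silently evicts the oldest entry, and insertion order is preserved.
log = deque(maxlen=3)  # cap: 3 documents

for i in range(5):
    log.append({"seq": i, "msg": f"event {i}"})

# Only the last 3 entries survive, in insertion order; 0 and 1 were evicted
# automatically -- no application-side cleanup code.
print([d["seq"] for d in log])  # [2, 3, 4]
```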

Understanding Atomic Write Operations

Write operations are atomic at the document level in MongoDB, meaning that a single document is never left in a partially updated state by a write. This means that writing to documents that are denormalized is atomic. However, writing to documents that are normalized requires separate write operations to subobjects in other collections; therefore, the write of the normalized object might not be atomic as a whole.

You need to keep atomic writes in mind when designing your documents and collections to ensure that the design fits the needs of the application. In other words, if you absolutely must write all parts of an object as a whole in an atomic manner, you need to design the object in a denormalized way.
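The difference can be sketched in plain Python (no MongoDB involved; the collection layouts, helper names, and the simulated_crash flag are all illustrative): a denormalized update touches one document, while a normalized update spans two collections and can be interrupted between the writes.

```python
# Denormalized: the whole object lives in one document, so a multi-field
# update corresponds to a single atomic document write in MongoDB.
denorm_user = {"name": "Ann", "home": {"city": "Springfield"}}

def update_denormalized(new_name, new_city):
    # One document, one write.
    denorm_user.update({"name": new_name, "home": {"city": new_city}})

# Normalized: the same object spans two collections, so the update takes
# two separate writes -- and the pair is not atomic as a whole.
users = {"ann": {"name": "Ann", "addr_id": "a1"}}
addresses = {"a1": {"city": "Springfield"}}

def update_normalized(new_name, new_city, simulated_crash=False):
    users["ann"]["name"] = new_name        # write 1: Users collection
    if simulated_crash:
        return                             # failure before write 2
    addresses["a1"]["city"] = new_city     # write 2: Addresses collection

update_denormalized("Anne", "Shelbyville")
update_normalized("Anne", "Shelbyville", simulated_crash=True)

# The denormalized object is fully updated; the normalized one is now
# half-updated across its two collections.
print(denorm_user["home"]["city"], addresses["a1"]["city"])
# Shelbyville Springfield
```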

Considering Document Growth

When you update a document, you must consider what effect the new data will have on document growth. MongoDB provides some padding in documents to allow for typical growth during an update operation. However, if the update causes the document to grow to a size that exceeds the allocated space on disk, MongoDB must relocate that document to a new location on the disk, incurring a performance hit on the system. Frequent document relocation also can lead to disk fragmentation issues. For example, if a document contains an array and you add enough elements to the array to exceed the space allocated, the object needs to be moved to a new location on disk.

One way to mitigate document growth is to use normalized objects for properties that can grow frequently. For example, instead of using an array to store items in a Cart object, you could create a CartItems collection; new items placed in the cart are then stored as new documents in the CartItems collection, each referencing the user’s cart.
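The CartItems approach can be sketched in plain Python (the cart_id field name and the helper functions are assumptions for illustration). Each new item becomes its own document, so the Cart document never grows and never needs to be relocated:

```python
# In-memory stand-ins for the Carts and CartItems collections.
carts = {"c1": {"_id": "c1", "user": "Ann"}}
cart_items = []

def add_item(cart_id, sku, qty):
    # A new item is a new document referencing the cart -- the Cart
    # document itself is untouched and stays the same size.
    cart_items.append({"cart_id": cart_id, "sku": sku, "qty": qty})

def items_for_cart(cart_id):
    return [i for i in cart_items if i["cart_id"] == cart_id]

add_item("c1", "widget", 2)
add_item("c1", "gadget", 1)
print(len(items_for_cart("c1")))  # 2
```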

Identifying Indexing, Sharding, and Replication Opportunities

MongoDB provides several mechanisms to optimize performance, scale, and reliability. As you are contemplating your database design, consider the following options:

  • Indexing: Indexes improve performance for frequent queries by building a lookup index that can be easily sorted. The _id property of a collection is automatically indexed because looking up items by ID is common practice. However, you also need to consider other ways users access data and implement indexes that enhance those lookup methods as well.
  • Sharding: Sharding is the process of slicing up large collections of data among multiple MongoDB servers in a cluster. Each MongoDB server is considered a shard. This provides the benefit of utilizing multiple servers to support a high number of requests to a large system. This approach provides horizontal scaling to your database. You should look at the size of your data and the number of requests that will access it to determine whether to shard your collections and how much to do so.
  • Replication: Replication is the process of duplicating data on multiple MongoDB instances in a cluster. When considering the reliability aspect of your database, you should implement replication to ensure that a backup copy of critical data is always readily available.
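What an index buys you can be illustrated with a toy model (this is only a conceptual sketch; MongoDB’s real indexes are B-tree structures maintained by the server, not application dictionaries): a precomputed map from a property value to matching documents turns a full collection scan into a direct lookup.

```python
# A small "collection" of user documents.
users = [
    {"_id": 1, "city": "Springfield"},
    {"_id": 2, "city": "Shelbyville"},
    {"_id": 3, "city": "Springfield"},
]

# Build a toy index on the city property: value -> list of document IDs.
index_by_city = {}
for doc in users:
    index_by_city.setdefault(doc["city"], []).append(doc["_id"])

# A query on city is now a single dictionary lookup instead of a scan
# over every document in the collection.
print(index_by_city["Springfield"])  # [1, 3]
```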

Large Collections vs. Large Numbers of Collections

Another important consideration when designing your MongoDB documents and collections is the number of collections the design will result in. Having a large number of collections doesn’t result in a significant performance hit, but having many items in the same collection does. Consider ways to break up your larger collections into more consumable chunks.

An example of this is storing a history of user transactions in the database for past purchases. You recognize that, for these completed purchases, you will never need to look them up together for multiple users. You need them available only for users to look at their own history. If you have thousands of users who have a lot of transactions, storing those histories in a separate collection for each user makes sense.
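The per-user split can be sketched as follows (the history_<user_id> naming scheme and the helper functions are assumptions for illustration, not a MongoDB convention): each write and read touches only one small, user-specific collection.

```python
# Stand-in for the database's named collections.
collections = {}

def record_purchase(user_id, item):
    # Route each user's transactions into that user's own collection.
    name = f"history_{user_id}"
    collections.setdefault(name, []).append({"item": item})

def purchase_history(user_id):
    # A user's history lookup never scans other users' transactions.
    return collections.get(f"history_{user_id}", [])

record_purchase("u1", "book")
record_purchase("u2", "lamp")
record_purchase("u1", "pen")

print(len(purchase_history("u1")))  # 2
```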

Deciding on Data Life Cycles

One of the most commonly overlooked aspects of database design is the data life cycle. How long should documents exist in a specific collection? Some collections have documents that should be kept indefinitely (for example, active user accounts). However, keep in mind that each document in the system incurs a performance hit when querying a collection. You should define a Time To Live (TTL) value for documents in each of your collections.

You can implement a TTL mechanism in MongoDB in several ways. One method is to implement code in your application to monitor and clean up old data. Another method is to utilize the MongoDB TTL setting on a collection, to define a profile in which documents are automatically deleted after a certain number of seconds or at a specific clock time.
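The application-side cleanup method can be sketched in plain Python (the document shapes and the sweep helper are illustrative; in MongoDB itself the TTL-index route would be something like coll.create_index("createdAt", expireAfterSeconds=3600) in pymongo): each document carries a creation timestamp, and a periodic sweep drops the expired ones.

```python
import time

TTL_SECONDS = 3600  # documents older than one hour expire

docs = [
    {"msg": "old", "created": time.time() - 7200},  # 2 hours old
    {"msg": "new", "created": time.time() - 60},    # 1 minute old
]

def sweep(documents, now=None):
    """Return only the documents still within their TTL."""
    now = time.time() if now is None else now
    return [d for d in documents if now - d["created"] < TTL_SECONDS]

docs = sweep(docs)
print([d["msg"] for d in docs])  # ['new']
```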

Another method for keeping collections small when you need only the most recent documents is to implement a capped collection that automatically keeps the size of the collection small.

Considering Data Usability and Performance

The final point to consider—and even reconsider—is data usability and performance. Ultimately, these are the two most important aspects of any web solution and, consequently, the storage behind it.

Data usability describes the capability of the database to satisfy the functionality of the website. You need to make certain first that the data can be accessed so that the website functions correctly. Users will not tolerate a website that does not do what they want it to. This also includes the accuracy of the data.

Then you can consider performance. Your database must deliver the data at a reasonable rate. You can consult the previous sections when evaluating and designing the performance factors for your database.

In some more complex circumstances, you might find it necessary to evaluate data usability, then consider performance, and then look back to usability in a few cycles until you get the balance correct. Also keep in mind that, in today’s world, usability requirements can change at any time. Be sure to design your documents and collections so that they can become more scalable in the future, if necessary.
