The Merb Way: Models

This chapter is from the book

5.3 Properties

Each DataMapper model is able to persist its data. The kind of data it is able to store is defined through its properties. If you’re using a typical database, these properties correlate with the columns of the model’s corresponding table. Below is an example of a DataMapper model with three properties.

class TastyAnimal
  include DataMapper::Resource

  property :id, Serial
  property :name, String
  property :endangered, TrueClass
end


In many ways, you can think of properties as persistent accessors. In fact, taking a look into the source of the property method (found in the Model module we spoke about earlier), we find that a dynamic getter and setter are created using class_eval:

def property(name, type, options = {})
  property = Property.new(self, name, type, options)

  # ...
end

# ...

# defines the getter for the property
def create_property_getter(property)
  class_eval <<-EOS, __FILE__, __LINE__
    def #{property.getter}
      attribute_get(#{property.name.inspect})
    end
  EOS

  # ...
end

# defines the setter for the property
def create_property_setter(property)
  unless instance_methods.include?("#{property.name}=")
    class_eval <<-EOS, __FILE__, __LINE__
      def #{property.name}=(value)
        attribute_set(#{property.name.inspect}, value)
      end
    EOS
  end
end
The most important thing to learn from the source shown above is that properties dynamically create getter and setter methods. Additionally, these methods can end up protected or private through visibility attributes. Finally, the getters and setters produced are not exactly equivalent to attr_reader and attr_writer because of their internal use of the methods attribute_get and attribute_set.
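To make the pattern concrete outside of DataMapper, here is a minimal sketch of a property-style macro that generates a getter/setter pair with class_eval and routes writes through a central attribute_set-style method. All names here (TinyProperties, Widget, dirty_attributes) are hypothetical, not DataMapper's actual code:

```ruby
# Minimal sketch (not DataMapper itself) of a property macro that
# generates a getter/setter pair with class_eval. Writes go through a
# central attribute_set-style method so the class can track dirty fields.
module TinyProperties
  def property(name)
    class_eval <<-EOS, __FILE__, __LINE__
      def #{name}
        (@attributes ||= {})[#{name.inspect}]
      end

      def #{name}=(value)
        attribute_set(#{name.inspect}, value)
      end
    EOS
  end
end

class Widget
  extend TinyProperties
  property :name

  def dirty_attributes
    @dirty_attributes ||= []
  end

  def attribute_set(name, value)
    dirty_attributes << name
    (@attributes ||= {})[name] = value
  end
end

widget = Widget.new
widget.name = 'sprocket'
widget.name             # => "sprocket"
widget.dirty_attributes # => [:name]
```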

Going back to the Resource source, we can find these two methods manipulating the values of model properties, once again located in Model. You’ll have to excuse this volleying back and forth, but the point of the Resource and Model modules is to separate individual resource methods from those related to the model as a whole.

# @api semipublic
def attribute_get(name)
  properties[name].get(self)
end

# @api semipublic
def attribute_set(name, value)
  properties[name].set(self, value)
end

def properties
  model.properties(repository.name)
end

You may have noticed the @api semipublic comment above the getter and setter methods. This is because application developers should not ordinarily need to use these methods. Plugin developers, on the other hand, may need to use them as the easiest way to get and set properties while making sure they are persisted.

For application developers, however, this does bring up one important point: Do not use instance variables to set property values. The reason is that while this will set the object’s value, it will unfortunately short-circuit the model code that is used to track whether a property is dirty. In other words, the property value may not persist later upon save. Instead, you should use the actual property method. Below you’ll find an example with comments that should get the point across.

class Fruit
  include DataMapper::Resource

  property :id, Serial
  property :name, String
  property :eaten, TrueClass

  def eat
    unless eaten?
      # will not persist upon save
      @eaten = true

      # will persist upon save (note the explicit self)
      self.eaten = true
    end
  end
end
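The underlying gotcha is a general Ruby rule: inside an instance method, a bare `eaten = true` creates a local variable rather than calling the generated writer, so the writer must be invoked through an explicit self. A plain-Ruby sketch (hypothetical Demo class, no DataMapper involved):

```ruby
# Hypothetical class demonstrating why writer methods need an explicit
# receiver: a bare `flag = true` assigns a local variable.
class Demo
  attr_accessor :flag

  def set_wrong
    flag = true       # local variable; @flag stays nil
    flag              # silence the "unused variable" warning
  end

  def set_right
    self.flag = true  # calls the flag= writer
  end
end

d = Demo.new
d.set_wrong
d.flag # => nil
d.set_right
d.flag # => true
```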


Before we describe the extended use of properties, let’s take a look at the database side to understand how persistence works.

5.3.1 Database storage

In order to persist the data of model objects, we need to set up our database to store that data. The default-generated configuration files use a SQLite3 database file called sample_development.db. This setup is perfect for most development scenarios given how quickly it gets you up and running. With that in mind, we’d say stick with it whenever possible, leaving the alteration of config/database.yml for production or staging environments.

Automigrating the DB schema

Databases typically need to be prepped for the data they will store during application development. The process by which DataMapper does this is called automigration, because DataMapper uses the properties listed in your models to automatically create your database schema for you. Using the provided Merb DataMapper rake task, we can automigrate the model that we created earlier and then take a peek inside the database to see what was done:

$ rake db:automigrate
$ sqlite3 sample_development.db
sqlite> .tables
tasty_animals
sqlite> .schema
CREATE TABLE "tasty_animals" ("id" INTEGER NOT NULL
  PRIMARY KEY AUTOINCREMENT, "name" VARCHAR(50),
  "endangered" BOOLEAN);

As you can see, a table with a pluralized and snake-cased name was created for our model, TastyAnimal. Remembering the various properties of the model class, we can also spot corresponding columns inside the schema’s CREATE statement. Note that while Ruby classes were used on the property lines, standard SQL types appear in the database.
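The table-naming step itself is simple string manipulation; here is a sketch of the transformation with a hypothetical helper (DataMapper actually delegates to Extlib's inflection methods, and real pluralization is smarter than the naive version below):

```ruby
# Hypothetical helper mirroring how a class name becomes a table name:
# snake_case the CamelCase name, then pluralize (naively, for illustration).
def storage_name(class_name)
  snake = class_name.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
  "#{snake}s"
end

storage_name('TastyAnimal') # => "tasty_animals"
```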

The code behind automigration is definitely worth studying, so let’s take a look at the module AutoMigrations, which includes itself within the Model module:

module DataMapper
  module AutoMigrations
    def auto_migrate!(repository_name = nil)
      auto_migrate_down!(repository_name)
      auto_migrate_up!(repository_name)
    end

    # @api private
    def auto_migrate_down!(repository_name = nil)
      # repository_name ||= default_repository_name
      repository(repository_name) do |r|
        r.adapter.destroy_model_storage(r, self)
      end
    end

    # @api private
    def auto_migrate_up!(repository_name = nil)
      repository(repository_name) do |r|
        r.adapter.create_model_storage(r, self)
      end
    end

    def auto_upgrade!(repository_name = nil)
      repository(repository_name) do |r|
        r.adapter.upgrade_model_storage(r, self)
      end
    end

    Model.send(:include, self)
  end # module AutoMigrations
end # module DataMapper

As you can see, there are two public API class methods you can use with models, auto_migrate! and auto_upgrade!. These effectively call the three adapter methods destroy_model_storage, create_model_storage, and upgrade_model_storage. Let’s go deep into the source and see how these three methods do the heavy lifting:

class DataMapper::Adapters::AbstractAdapter
  module Migration

    def upgrade_model_storage(repository, model)
      table_name = model.storage_name(repository.name)

      if success = create_model_storage(repository, model)
        return model.properties(repository.name)
      end

      properties = []

      model.properties(repository.name).each do |property|
        schema_hash = property_schema_hash(repository, property)

        next if field_exists?(table_name, schema_hash[:name])

        statement = alter_table_add_column_statement(
          table_name, schema_hash)
        execute(statement)

        properties << property
      end

      properties
    end

    def create_model_storage(repository, model)
      return false if storage_exists?(
        model.storage_name(repository.name))

      execute(create_table_statement(repository, model))
      # ... create indexes

      true
    end

    def destroy_model_storage(repository, model)
      execute(drop_table_statement(repository, model))
      true
    end

The simplest of these, destroy_model_storage, executes a drop table statement. The create_model_storage method, on the other hand, first checks whether the model storage already exists, returning false if it does; otherwise it creates the storage and returns true. Finally, upgrade_model_storage is the most complicated of the three. It first attempts to create the storage (effectively testing whether it exists) and then attempts to add new columns for new properties. This leaves existing data in place and is perfect if you have simply added properties to a model. Lest this appear to be no more than hand waving, let’s dig even deeper into the methods that the AbstractAdapter uses to create the SQL for these statements:

class DataMapper::Adapters::AbstractAdapter

  # immediately following the previous code

  module SQL

    def alter_table_add_column_statement(table_name, schema_hash)
      "ALTER TABLE #{quote_table_name(table_name)} " +
      "ADD COLUMN #{property_schema_statement(schema_hash)}"
    end

    def create_table_statement(repository, model)
      repository_name = repository.name

      statement = <<-EOS.compress_lines
        CREATE TABLE
        #{quote_table_name(model.storage_name(repository_name))}
        (#{model.properties_with_subclasses(
          repository_name).map { |p|
            property_schema_statement(
              property_schema_hash(repository, p))
          } * ', '}
      EOS

      if (key = model.key(repository_name)).any?
        statement << ", PRIMARY KEY(#{ key.map { |p|
          quote_column_name(p.field(repository_name))
        } * ', '})"
      end

      statement << ')'
      statement
    end

    def drop_table_statement(repository, model)
      "DROP TABLE IF EXISTS " +
        quote_table_name(model.storage_name(repository.name))
    end

    def property_schema_hash(repository, property)
      schema = self.class.type_map[property.type].
        merge(:name => property.field(repository.name))

      if property.primitive == String &&
          schema[:primitive] != 'TEXT'
        schema[:size] = property.length
      elsif property.primitive == BigDecimal ||
          property.primitive == Float
        schema[:precision] = property.precision
        schema[:scale]     = property.scale
      end

      schema[:nullable?] = property.nullable?
      schema[:serial?]   = property.serial?

      if property.default.nil?
        schema.delete(:default) unless property.nullable?
      else
        if property.type.respond_to?(:dump)
          schema[:default] = property.type.dump(
            property.default, property)
        else
          schema[:default] = property.default
        end
      end

      schema
    end

    def property_schema_statement(schema)
      statement = quote_column_name(schema[:name])
      statement << " #{schema[:primitive]}"

      if schema[:precision] && schema[:scale]
        statement << "(#{[ :precision, :scale ].map {
          |k| quote_column_value(schema[k])
        } * ','})"
      elsif schema[:size]
        statement << "(#{quote_column_value(schema[:size])})"
      end

      statement << ' NOT NULL' unless schema[:nullable?]
      statement << " DEFAULT " +
        quote_column_value(schema[:default]) if schema.key?(:default)
      statement
    end
  end # module SQL

  include SQL
end


The first thing you may notice is that the methods are included within a module called SQL and that the module is included immediately after it is closed. The reason for this is that within DataMapper adapters, code is often organized by use, and encapsulating private methods in a module this way allows regions of public and private methods to alternate cleanly.
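The pattern is easy to reproduce in isolation. Here is a sketch (the Adapter class and helper names are hypothetical) of closing a helper module, including it immediately, and then marking its methods private:

```ruby
# Sketch of the "close the module, include it right away" organization:
# public API methods first, then a region of private helpers.
class Adapter
  def public_api
    helper_one + helper_two
  end

  module Helpers
    def helper_one
      1
    end

    def helper_two
      2
    end
  end
  include Helpers
  private(*Helpers.instance_methods)
end

Adapter.new.public_api # => 3
```

Calling helper_one from outside the class now raises NoMethodError, while public_api can still use it internally.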

Now, turning to the actual methods, we can see that some of them—for instance, drop_table_statement—are just a line of simple SQL. Likewise, alter_table_add_column_statement is a short method that outputs ALTER TABLE ... ADD COLUMN statements. The create_table_statement, however, is far more complex, relying on various other methods to get its work done. One of these, properties_with_subclasses, pulls up all model properties, including those that are simply keys used with relationships. We’ll go further into properties_with_subclasses later on when we examine model relationships, but for now let’s take a look at the method property_schema_statement, which quotes the property as a column name and then appends its type. It also adds the appropriate SQL for decimals, non-nullables, and default values.

We hope this has brought you deep enough into the inner workings of automigration to both appreciate its design and get a feel for how adapter code handles the production of SQL more generally. But it would also be nice to be able to use some of it practically, and thankfully you can do so. For instance, if you’re in mid-development, you may fire up interactive Merb and use auto_upgrade! on a model to which you’ve added properties:

> Fruit.auto_upgrade!

Likewise, you may want to refresh the data of a model using auto_migrate! in the middle of a test file. Here’s an example we’ve spotted in the wild:

before :each do
  Fruit.auto_migrate!
end

5.3.2 Defining properties

Let’s now take a more rigorous look at properties as well as the options we have while defining them. As we’ve seen, each property is defined on its own line by using the method property. This class method is mixed in via the inclusion of DataMapper::Resource. It takes a minimum of two arguments, the first being a symbol that effectively names the property and the second being a class that defines what type of data is to be stored. As we will see soon, an optional hash of arguments may also be passed in.

Property types

While abstracting away the differences across database column types, DataMapper has chosen to stay as true as possible to using Ruby to describe property types. Below is a list of the various classes supported by the DataMapper core. Note that the inclusion of DataMapper::Resource mixes these types into your model class, so when defining properties, you will not have to use the module prefix DM:: before those listed with it.

  • Class—stores a Ruby Class name as a string. Intended for use with inheritance, primarily through the property type DM::Discriminator.
  • String—stores a Ruby String. Default maximum length is 50 characters. Length can be defined by the optional hash key :length.
  • Integer—stores a Ruby Integer. Length can be defined by the optional hash key :length.
  • BigDecimal—stores a Ruby BigDecimal, intended for numbers where decimal exactitude is necessary. Can use the option hash keys :precision and :scale.
  • Float—stores a Ruby Float. Primarily intended for numbers where decimal exactitude is not critical. Can use the two options hash keys :precision and :scale.
  • Date—stores a Ruby Date.
  • DateTime—stores a Ruby DateTime.
  • Time—stores a Ruby Time.
  • Object—allows for the marshaling of a full object into a record. It is serialized into text upon storage and when retrieved is available as the original object.
  • TrueClass—a Boolean that works with any of the values in the array [0, 1, 't', 'f', true, false]. In MySQL it translates down to a tinyint, in PostgreSQL a bool, and in SQLite a boolean.
  • DM::Boolean—an alias of TrueClass. This is around for legacy DataMapper support, simply to provide a more commonly recognized name for the type.
  • Discriminator—stores the model class name as a string. Used for single-table inheritance.
  • DM::Serial—used on the serial ID of a model. Serial IDs are auto-incremented integers that uniquely apply to single records. Alternatively, a property can use the Integer class and set :serial to true. You will nearly always see this type applied to the id property.
  • DM::Text—stores larger textual data and is notably lazy-loaded by default.

You may be interested in knowing how the casting in and out of property values works. For the primitive types, values coming out of the database are cast using the method Property#typecast. Below we see how this method coerces raw values into the Ruby objects we want.

def typecast(value)
  return type.typecast(value, self) if type.respond_to?(:typecast)
  return value if value.kind_of?(primitive) || value.nil?

  if    primitive == TrueClass
    %w[ true 1 t ].include?(value.to_s.downcase)
  elsif primitive == String
    value.to_s
  elsif primitive == Float
    value.to_f
  elsif primitive == Integer
    value_to_i = value.to_i
    if value_to_i == 0
      value.to_s =~ /^(0x|0b)?0+/ ? 0 : nil
    else
      value_to_i
    end
  elsif primitive == BigDecimal
    BigDecimal(value.to_s)
  elsif primitive == DateTime
    typecast_to_datetime(value)
  elsif primitive == Date
    typecast_to_date(value)
  elsif primitive == Time
    typecast_to_time(value)
  elsif primitive == Class
    self.class.find_const(value)
  else
    value
  end
end
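To see the Integer branch's edge-case handling on its own, here is that logic extracted into a standalone, hypothetical helper. The subtlety is that to_i returns 0 both for a genuine "0" and for unparseable garbage, so a regexp distinguishes the two:

```ruby
# Hypothetical standalone version of the Integer typecast branch above.
def typecast_integer(value)
  value_to_i = value.to_i
  if value_to_i == 0
    # "0", "0x0", "0b0" really mean zero; anything else that to_i'd
    # to zero (like "abc") is unparseable, so return nil
    value.to_s =~ /^(0x|0b)?0+/ ? 0 : nil
  else
    value_to_i
  end
end

typecast_integer('42')  # => 42
typecast_integer('0')   # => 0
typecast_integer('abc') # => nil
```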


Custom types, however, are handled by subclasses of an abstract type class called DataMapper::Type. These load and dump data in whatever way they are programmed to do. We’ll see custom types later on when we examine some DataMapper-type plugins, but for now let’s take a look at one of the custom types from the DataMapper core, Serial:

module DataMapper
  module Types
    class Serial < DataMapper::Type
      primitive Integer
      serial true
    end # class Serial
  end # module Types
end # module DataMapper

Note its use of the methods primitive and serial, which are defined in the class DataMapper::Type:

class DataMapper::Type
  PROPERTY_OPTIONS = [
    :accessor, :reader, :writer,
    :lazy, :default, :nullable, :key, :serial, :field,
    :size, :length, :format, :index, :unique_index,
    :check, :ordinal, :auto_validation, :validates,
    :unique, :track, :precision, :scale
  ]

  # ...

  class << self

    PROPERTY_OPTIONS.each do |property_option|
      self.class_eval <<-EOS, __FILE__, __LINE__
        def #{property_option}(arg = nil)
          return @#{property_option} if arg.nil?

          @#{property_option} = arg
        end
      EOS
    end

    def primitive(primitive = nil)
      return @primitive if primitive.nil?
      @primitive = primitive
    end

    # ...
  end
end

From this we can first see that the primitive method sets the type to which the property value should be dumped. The serial method, on the other hand, is an example of a property option, which we’re about to address.

Option hash

The third argument that the property method can take is an option hash, which affects various behavioral aspects of the property. For instance, below we’ve specified that a property should default to some value.

class Website
  include DataMapper::Resource

  property :id, Serial
  property :domain, String
  property :color_scheme, String, :default => 'blue'
end
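A :default can also be a callable (such as a lambda), evaluated when the resource is instantiated rather than at class-definition time. A sketch of that resolution logic, using a hypothetical resolve_default helper:

```ruby
# Hypothetical helper showing how a :default option can be resolved:
# literal values are used as-is, callables are invoked per resource.
def resolve_default(default, resource = nil)
  default.respond_to?(:call) ? default.call(resource) : default
end

resolve_default('blue')                        # => "blue"
resolve_default(lambda { |resource| 'teal' })  # => "teal"
```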

Here’s a list of the various property options and their uses:

  • :accessor—takes the value :private, :protected, or :public. Sets the access privileges of the property as both a reader and a writer. Defaults to :public.
  • :reader—takes the value :private, :protected, or :public. Sets the access privileges of the property as a reader. Defaults to :public.
  • :writer—takes the value :private, :protected, or :public. Sets the access privileges of the property as a writer. Defaults to :public.
  • :lazy—determines whether the property should be lazy-loaded or not. Lazy-loaded properties are not read from the repository unless they are used. Defaults to false on most properties, but is notably true on DM::Text.
  • :default—sets the default value of the property. Can take any value appropriate for the type.
  • :nullable—if set to false, disallows a null value for the property. When dm-validations is used, a null value then invalidates the model.
  • :key—defines a property as the table key. This allows for natural keys in place of a serial ID. This key can be used as the index on the model class in order to access the record.
  • :serial—sets the property to be auto-incremented as well as to serve as the table key.
  • :field—manually overrides the field name. Best used for legacy repositories.
  • :size—sets the size of the property type.
  • :length—alias of :size.
  • :format—used with the String property type. When used with dm-validations, :format can set a regular expression against which strings must validate.
  • :index—sets the property to be indexed for faster retrieval. If set to a symbol instead of to true, it can be used to create multicolumn indexes.
  • :unique_index—defines a unique index for the property. When used with dm-validations, new records with nonunique property values are marked invalid. If set to a symbol instead of true, it can be used to create multicolumn indexes.
  • :auto_validation—when used with dm-validations, can be used to turn off autovalidations by passing the value false.
  • :track—determines when a property should be tracked for dirtiness. Takes the values :get, :set, :load, and :hash.
  • :precision—sets the number of decimal places allowed for BigDecimal and Float type properties.
  • :scale—sets the number of decimal places after the decimal point for BigDecimal and Float type properties.
