
Departures App

Now that you have the data files it’s time to start working on the app. Create a new Rails app that uses Postgres as the database:

rails new departures --skip-bundle -d postgresql

You can look in Appendix A, “Ruby and Rails Setup,” for more information on how I generally create and configure a new Rails app. Create a new folder, db/data_files, move the two CSV files in there, and don’t forget to run rake db:create.


Airports

Before we dive into the departures, we need to load some of the related data. As you know, a flight begins and ends at an airport. We will use the file provided with the data challenge to load our airports.

Generate the Model

Create the Airport model and migration as follows:

rails generate model airport iata:string{4}:uniq airport city state country lat:float long:float

That gives you an Airport model and the migration to create the airports table. We specify that the iata field should be no more than 4 characters long, and the :uniq option adds a unique index on that field.

Load the Data

There are a lot of airports, but the data file is not very large. We can use Ruby’s built-in CSV library to read and convert the data, and we can use ActiveRecord to create new records. I use the same sort of rake task as in previous chapters, namespaced in the db:seed namespace.

I am generally not a fan of using “long” as an abbreviation for longitude, but that is what was in the data file. For this file and the next I wanted to keep the field names and header names in sync so that I could show the header converter in action. It takes the header field, downcases it, and then symbolizes it. The beauty of that is that you can take the row as you read it in and convert it to a hash that ActiveRecord understands in a create statement. The keys do not need to be converted to symbols for ActiveRecord to understand them, but I like my keys as symbols.
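To see the header converter in action on a couple of made-up rows (the column names match the data file, the values are invented), note how :symbol downcases "IATA" to :iata and how row.to_hash then hands back symbol keys ready for a create call:

```ruby
require 'csv'

# Hypothetical sample data; the real airports.csv has these columns.
csv_data = <<~CSV
  IATA,Airport,City,State,Country,Lat,Long
  00M,Thigpen,Bay Springs,MS,USA,31.95,-89.23
CSV

rows = CSV.parse(csv_data, headers: true, header_converters: :symbol)
row  = rows.first
# Headers are downcased and symbolized, so the hash is ready for create:
puts row.to_hash.inspect
```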

require 'csv'

CSV::Converters[:blank_to_nil] = lambda do |field|
  field && field.empty? ? nil : field
end

namespace :db do
  namespace :seed do
    desc "Import airport data"
    task :import_airports => :environment do
      if Airport.count == 0
        filename     = Rails.root.join('db', 'data_files', 'airports.csv')
        fixed_quotes = File.read(filename).gsub(/\\"/,'""')
        CSV.parse(fixed_quotes, :headers => true, :header_converters => :symbol, :converters => [:blank_to_nil]) do |row|
          Airport.create!(row.to_hash)
        end
      end
    end
  end
end

The really cool thing here is that the CSV library can understand how to do simple transformations as it reads the data. It can automatically convert fields to integer (any field that Integer() would accept), float (any field Float() accepts), date (Date::parse()), datetime (DateTime::parse()), and any combination of these. You can also create your own in addition to what the standard library offers, which is what we do with :blank_to_nil.
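For instance (values invented for illustration), we can chain the built-in :float converter with our custom :blank_to_nil. CSV applies converters left to right and skips any field that an earlier converter has already turned into a non-String:

```ruby
require 'csv'

# Register the custom converter again so this snippet runs on its own.
CSV::Converters[:blank_to_nil] = lambda do |field|
  field && field.empty? ? nil : field
end

row = CSV.parse_line('31.95,-89.23,,MS',
                     converters: [:float, :blank_to_nil])
# Numeric fields come back as Floats, the empty field as nil,
# and anything else stays a String.
p row
```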

We also needed to clean the data a little before we could feed it into CSV. The CSV library expects quotes that are within strings to be escaped differently than the way they were escaped in the data. CSV will consider a double sequence of the quote character to be an escaped quote.
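As a small sketch of that cleanup (the sample line below is invented), the gsub turns each backslash-escaped quote into a doubled quote, after which CSV parses the field cleanly:

```ruby
require 'csv'

# Hypothetical raw line with backslash-escaped quotes, as found in the data.
raw   = '00R,"Livingston Muni \"North\"",TX'
fixed = raw.gsub(/\\"/, '""')
# fixed is now: 00R,"Livingston Muni ""North""",TX
p CSV.parse_line(fixed)
```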


Carriers

The carriers.csv file is the list of all the airlines. There are a lot of airlines, but the file is not very large, and it pairs nicely with the departure data.

Generate the Model

This model is a lot simpler, with only two fields. We do the same sort of thing we did for the airports:

rails g model carrier code:string{7}:uniq description

Load the Data

The carrier file is also small, so we can use CSV and ActiveRecord again to load the data.

desc "Import airline/carrier data"
task :import_carriers => :environment do
  if Carrier.count == 0
    filename = Rails.root.join('db', 'data_files', 'carriers.csv')
    CSV.foreach(filename, :headers => true, :header_converters => :symbol) do |row|
      Carrier.create!(row.to_hash)
    end
  end
end

We can run the migration and rake task together:

bundle exec rake db:migrate db:seed:import_carriers


Departures

The last file you need to grab, if you haven't already, is the 1999 departures data (http://stat-computing.org/dataexpo/2009/the-data.html). The DataExpo site provides a nice database schema for a SQLite database. Even though we are using Postgres, we can still use it for guidance. Unfortunately, the data is not clean: the airline IATA code (UniqueCarrier) is sometimes too long for the field. Rather than modify the data, we will make the field long enough to hold it.

Generate the Model

There are a lot of fields, and there is a lot of data. We are going to need to do things a little differently here. We will create the model and migration using this generator:

rails g model departure year:integer:index month:integer \
  day_of_month:integer day_of_week:integer dep_time:integer \
  crs_dep_time:integer arr_time:integer crs_arr_time:integer \
  unique_carrier:string{6}:index flight_num:integer \
  tail_num:string{8} actual_elapsed_time:integer \
  crs_elapsed_time:integer air_time:integer arr_delay:integer \
  dep_delay:integer origin:string{3}:index dest:string{3}:index \
  distance:integer taxi_in:integer taxi_out:integer \
  cancelled:boolean:index cancellation_code:string{1} \
  diverted:boolean carrier_delay:integer weather_delay:integer \
  nas_delay:integer security_delay:integer \
  late_aircraft_delay:integer

Load the Data

This is a large data file, so we are going to optimize the load and use raw SQL instead of ActiveRecord to create these records.

Taking a note from Sandi Metz in Practical Object-Oriented Design in Ruby (POODR), I’ve abstracted out the data sanitation to a module. I put this in a file called db_sanitize.rb in the lib directory:

module DBSanitize
  def string(value)
    value.gsub!(/'/, '') unless value.nil?
    value.nil? ? 'NULL' : "'#{value}'"
  end

  def integer(value)
    value.nil? ? 'NULL' : Integer(value)
  end

  def boolean(value)
    value == '1'
  end
end

I like this a lot better. Note that this code is not automatically loaded, so you have to require the file before including it. I put this at the top of the rake file under the CSV require statement:

require 'db_sanitize'
include DBSanitize
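As a quick sanity check of those helpers (the module is repeated here so the snippet runs standalone), note how string both strips stray single quotes and wraps the value for SQL:

```ruby
# DBSanitize repeated here so the snippet runs on its own.
module DBSanitize
  def string(value)
    value.gsub!(/'/, '') unless value.nil?
    value.nil? ? 'NULL' : "'#{value}'"
  end

  def integer(value)
    value.nil? ? 'NULL' : Integer(value)
  end

  def boolean(value)
    value == '1'
  end
end

include DBSanitize

puts string("O'Hare")  # stray quote stripped, value quoted for SQL
puts string(nil)       # NULL, so the column is left empty
puts integer("42")     # a real Integer
puts boolean("1")      # true
```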

Now we can sanitize (clean) our data as we read it. I tested bulk insert versus inserting one record at a time with this data. I was surprised to see that the bulk insert did not save any time. Here is the rake task to import the departure data:

CSV::Converters[:na_to_nil] = lambda do |field|
  field && field == "NA" ? nil : field
end

desc "Import flight departures data"
task :import_departures => :environment do
  if Departure.count == 0
    filename  = Rails.root.join('db', 'data_files', '1999.csv')
    timestamp = Time.now.to_s(:db)
    CSV.foreach(filename,
      :headers           => true,
      :header_converters => :symbol,
      :converters        => [:na_to_nil]
    ) do |row|
      puts "#{$.} #{Time.now}" if $. % 10000 == 0
      data = {
        :year                => integer(row[:year]),
        :month               => integer(row[:month]),
        :day_of_month        => integer(row[:dayofmonth]),
        :day_of_week         => integer(row[:dayofweek]),
        :dep_time            => integer(row[:deptime]),
        :crs_dep_time        => integer(row[:crsdeptime]),
        :arr_time            => integer(row[:arrtime]),
        :crs_arr_time        => integer(row[:crsarrtime]),
        :unique_carrier      => string(row[:uniquecarrier]),
        :flight_num          => integer(row[:flightnum]),
        :tail_num            => string(row[:tailnum]),
        :actual_elapsed_time => integer(row[:actualelapsedtime]),
        :crs_elapsed_time    => integer(row[:crselapsedtime]),
        :air_time            => integer(row[:airtime]),
        :arr_delay           => integer(row[:arrdelay]),
        :dep_delay           => integer(row[:depdelay]),
        :origin              => string(row[:origin]),
        :dest                => string(row[:dest]),
        :distance            => integer(row[:distance]),
        :taxi_in             => integer(row[:taxiin]),
        :taxi_out            => integer(row[:taxiout]),
        :cancelled           => boolean(row[:cancelled]),
        :cancellation_code   => string(row[:cancellationcode]),
        :diverted            => boolean(row[:diverted]),
        :carrier_delay       => integer(row[:carrierdelay]),
        :weather_delay       => integer(row[:weatherdelay]),
        :nas_delay           => integer(row[:nasdelay]),
        :security_delay      => integer(row[:securitydelay]),
        :late_aircraft_delay => integer(row[:lateaircraftdelay]),
        :created_at          => string(timestamp),
        :updated_at          => string(timestamp)
      }
      sql  = "INSERT INTO departures (#{data.keys.join(',')})"
      sql += " VALUES (#{data.values.join(',')})"
      ActiveRecord::Base.connection.execute(sql)
    end
  end
end

Run the migration and rake task (bundle exec rake db:migrate db:seed:import_departures). The departures data took about half an hour to load on my computer. Maybe (hopefully) yours is faster than mine. Alternatively, I created a dump file using pg_dump that you can load more quickly:

pg_restore -v -d departures_development -j3 db/data_files/departures.

I feel inclined to point out, as a matter of perspective, that these immense file loads are not something that you would do in production very often. In the “real world” you’d be accumulating these data gradually over time. In essence we are playing catch-up with those production apps.

Foreign Keys

Now that we have the data loaded, there is one more thing I want to do with the departures table. We can use the Rails generator to create a migration (rails g migration AddForeignKeysToDepartures). Here are the contents of the change method:

add_foreign_key :departures, :carriers, :column => :unique_carrier, :primary_key => :code
add_foreign_key :departures, :airports, :column => :origin, :primary_key => :iata
add_foreign_key :departures, :airports, :column => :dest, :primary_key => :iata

Why do this when we already add the relationships in the models with belongs_to and has_many, you ask? Those declarations are definitely a good idea, but neither is an absolute safeguard. To truly enforce referential integrity, you have to enforce it in the database itself. We could have added these keys before loading the data, but checking every record on insert makes large data loads take a lot longer. Removing foreign keys and indexes before a large data load is a common strategy. Just remember to add them back!

The other thing to note is that the records those foreign keys point to must already exist. That is why we loaded the airports and carriers first.
