Using Fuzzy Matching to Search by Sound with Python


Searching for a person's name in a database is a unique challenge. Depending on the source and age of the data, you may not be able to count on the spelling of the name being correct, or even the same name being spelled the same way when it appears more than once. Discrepancies between stored data and search terms may be introduced due to personal choice or cultural differences in spellings, homophones, transcription errors, illiteracy, or simply lack of standardized spellings during some time periods. These sorts of problems are especially prevalent in transcriptions of handwritten historical records used by historians, genealogists, and other researchers.

A common way to solve the string-search problem is to look for values that are "close" to the same as the search target. Using a traditional fuzzy match algorithm to compute the closeness of two arbitrary strings is expensive, though, and it isn't appropriate for searching large data sets. A better solution is to compute hash values for entries in the database in advance, and several special hash algorithms have been created for this purpose. These phonetic hash algorithms allow you to compare two words or names based on how they sound, rather than the precise spelling.
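
To see why the pairwise approach gets expensive, consider the standard library's difflib module, which implements a traditional closeness measure: a call such as get_close_matches() has to score the search target against every stored entry, so the work grows with the size of the data set. A small sketch (the name list is just sample data):

```python
import difflib

# A traditional fuzzy match scores the target against every
# entry, one SequenceMatcher comparison at a time -- fine for a
# handful of names, too slow for a large database.
names = ['Catherine', 'Katherine', 'Katarina', 'Jonathan', 'Teresa']
matches = difflib.get_close_matches('Kathryn', names, n=3, cutoff=0.6)
print(matches)
```

A precomputed phonetic hash turns the same search into a single indexed equality lookup.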

Early Efforts: Soundex

One such algorithm is Soundex, developed by Margaret K. Odell and Robert C. Russell in the early 1900s. The Soundex algorithm appears frequently in genealogical contexts because it's associated with the U.S. Census and is specifically designed to encode names. A Soundex hash value is calculated by using the first letter of the name and converting the consonants in the rest of the name to digits by using a simple lookup table. Vowels and duplicate encoded values are dropped, and the result is padded up to—or truncated down to—four characters.
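
The table-driven scheme just described is small enough to sketch in a few lines of Python. This toy version (the function name is mine) follows the variant used by the Fuzzy library shown below, which collapses repeated codes after the vowels are dropped:

```python
def simple_soundex(name, size=4):
    # Simplified Soundex sketch: keep the first letter, map the
    # remaining consonants to digits, drop vowels (plus H, W, Y),
    # collapse adjacent duplicate codes, then pad or truncate.
    digits = {ch: d
              for letters, d in [('BFPV', '1'), ('CGJKQSXZ', '2'),
                                 ('DT', '3'), ('L', '4'),
                                 ('MN', '5'), ('R', '6')]
              for ch in letters}
    name = name.upper()
    encoded, prev = name[0], ''
    for ch in name[1:]:
        d = digits.get(ch)
        if d is None:
            continue          # vowels, H, W, and Y are dropped
        if d != prev:
            encoded += d
        prev = d
    return (encoded + '0' * size)[:size]

print(simple_soundex('Catherine'), simple_soundex('Smyth'))
```

It reproduces the hash values shown in the output below, such as C365 for Catherine and S530 for Smyth.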

The Fuzzy library includes a Soundex implementation for Python programs:

```
#!/usr/bin/env python

import fuzzy

names = ['Catherine', 'Katherine', 'Katarina',
         'Johnathan', 'Jonathan', 'John',
         'Teresa', 'Theresa',
         'Smith', 'Smyth',
         'Jessica',
         'Joshua',
         ]

soundex = fuzzy.Soundex(4)

for n in names:
    print('%-10s' % n, soundex(n))
```

The output of show_soundex.py demonstrates that some of the names with similar sounds are encoded with the same hash value, but the results are not ideal:

```
$ python show_soundex.py
Catherine  C365
Katherine  K365
Katarina   K365
Johnathan  J535
Jonathan   J535
John       J500
Teresa     T620
Theresa    T620
Smith      S530
Smyth      S530
Jessica    J200
Joshua     J200
```

In this example, the variations Theresa and Teresa both produce the same Soundex hash, but Catherine and Katherine start with a different letter; even though they sound the same, the hash outputs are different. The last two names, Jessica and Joshua, are not related at all but are given the same hash value because the letters J, S, and C all map to the digit 2, and the algorithm removes duplicates. These types of failures illustrate a major shortcoming of Soundex.

Beyond English: NYSIIS

Algorithms developed after Soundex use different encoding schemes, either building on Soundex by tweaking the lookup table or starting from scratch with their own rules. All of them process phonemes differently in an attempt to improve accuracy. For example, in the 1970s, the New York State Identification and Intelligence System (NYSIIS) algorithm was published by Robert L. Taft. NYSIIS was originally used by what is now the New York State Division of Criminal Justice Services to help identify people in their database. It produces better results than Soundex because it takes special care to handle phonemes that occur in European and Hispanic surnames.

```
#!/usr/bin/env python

import fuzzy

names = ['Catherine', 'Katherine', 'Katarina',
         'Johnathan', 'Jonathan', 'John',
         'Teresa', 'Theresa',
         'Smith', 'Smyth',
         'Jessica',
         'Joshua',
         ]

for n in names:
    print('%-10s' % n, fuzzy.nysiis(n))
```

The output of show_nysiis.py is better than the results from Soundex with our sample data:

```
$ python show_nysiis.py
Catherine  CATARAN
Katherine  CATARAN
Katarina   CATARAN
Johnathan  JANATAN
Jonathan   JANATAN
John       JAN
Teresa     TARAS
Theresa    TARAS
Smith      SNATH
Smyth      SNATH
Jessica    JASAC
Joshua     JAS
```

In this case, Catherine, Katherine, and Katarina all map to the same hash value. The incorrect match of Jessica and Joshua is also eliminated because more of the letters from the names are used in the NYSIIS hash values.

A New Approach: Metaphone

Metaphone, published in 1990 by Lawrence Philips, is another algorithm that improves on earlier systems such as Soundex and NYSIIS. The Metaphone algorithm is significantly more complicated than the others because it includes special rules for handling spelling inconsistencies and for looking at combinations of consonants in addition to some vowels. An updated version of the algorithm, called Double Metaphone, goes even further by adding rules for handling some spellings and pronunciations from languages other than English.

```
#!/usr/bin/env python

import fuzzy

names = ['Catherine', 'Katherine', 'Katarina',
         'Johnathan', 'Jonathan', 'John',
         'Teresa', 'Theresa',
         'Smith', 'Smyth',
         'Jessica',
         'Joshua',
         ]

dmetaphone = fuzzy.DMetaphone(4)

for n in names:
    print('%-10s' % n, dmetaphone(n))
```

In addition to having a broader set of encoding rules, Double Metaphone generates two alternate hashes for each input word. This gives the caller the ability to present search results with two levels of precision. In the results from the sample program, Catherine and Katherine have the same primary hash value. Their secondary hash value is the same as the primary hash for Katarina, finding the match that Soundex didn't, but giving it less weight than the results from NYSIIS implied.

```
$ python show_dmetaphone.py
Catherine  ['K0RN', 'KTRN']
Katherine  ['K0RN', 'KTRN']
Katarina   ['KTRN', None]
Johnathan  ['JN0N', 'ANTN']
Jonathan   ['JN0N', 'ANTN']
John       ['JN', 'AN']
Teresa     ['TRS', None]
Theresa    ['0RS', 'TRS']
Smith      ['SM0', 'XMT']
Smyth      ['SM0', 'XMT']
Joshua     ['JX', 'AX']
```
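
One way to act on those two levels of precision is to rank candidates by which hashes agree. The helper below is my own sketch, exercised with hash pairs taken from the show_dmetaphone.py output above:

```python
def match_strength(hashes_a, hashes_b):
    # hashes_a and hashes_b are (primary, secondary) pairs as
    # returned by a Double Metaphone implementation; the
    # secondary entry may be None.
    if hashes_a[0] == hashes_b[0]:
        return 'strong'
    if {h for h in hashes_a if h} & {h for h in hashes_b if h}:
        return 'weak'   # the match involves a secondary hash
    return None

# Hash values taken from the show_dmetaphone.py output above.
print(match_strength(['K0RN', 'KTRN'], ['K0RN', 'KTRN']))  # Catherine vs. Katherine
print(match_strength(['K0RN', 'KTRN'], ['KTRN', None]))    # Katherine vs. Katarina
print(match_strength(['JN', 'AN'], ['TRS', None]))         # John vs. Teresa
```

A search UI could list the 'strong' matches first and offer the 'weak' ones as "did you mean" suggestions.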

Applying Phonetic Searches

Using phonetic searches in your application is straightforward, but may require adding extensions to the database server or bundling a third-party library with your application. MySQL, Microsoft SQL Server, and SQLite (when compiled with the SQLITE_SOUNDEX option) provide a SOUNDEX() string function that can be invoked directly in queries. PostgreSQL's fuzzystrmatch extension adds soundex() as well as functions to calculate hashes using the original Metaphone algorithm and Double Metaphone.
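
When the database engine lacks a built-in function, a library implementation can often be registered as a user-defined SQL function instead. Here is a sketch using Python's built-in sqlite3 module; the soundex4() helper is a simplified stand-in for a real library call such as fuzzy.Soundex(4):

```python
import sqlite3

def soundex4(name):
    # Simplified stand-in for a real Soundex implementation.
    digits = {ch: d
              for letters, d in [('BFPV', '1'), ('CGJKQSXZ', '2'),
                                 ('DT', '3'), ('L', '4'),
                                 ('MN', '5'), ('R', '6')]
              for ch in letters}
    name = name.upper()
    encoded, prev = name[0], ''
    for ch in name[1:]:
        d = digits.get(ch)
        if d is None:
            continue
        if d != prev:
            encoded += d
        prev = d
    return (encoded + '000')[:4]

db = sqlite3.connect(':memory:')
db.create_function('soundex', 1, soundex4)  # expose it to SQL
db.execute('CREATE TABLE people (name TEXT)')
db.executemany('INSERT INTO people VALUES (?)',
               [('Smith',), ('Smyth',), ('Jones',)])
rows = db.execute('SELECT name FROM people '
                  'WHERE soundex(name) = soundex(?) '
                  'ORDER BY name', ('Smith',)).fetchall()
print([r[0] for r in rows])  # Smith and Smyth share S530
```

For anything beyond a toy data set you would store the hash in its own indexed column rather than recomputing it per row, as the MongoDB example below does.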

Standalone implementations for all of the algorithms also are available for major programming languages such as Python, PHP, Ruby, Perl, C/C++, and Java. These libraries can be used with databases that don't have support for phonetic hash functions built in, such as MongoDB. For example, this script loads a series of names into a database, saving each hash value as a precomputed value to make searching easier later:

```
#!/usr/bin/env python

import argparse

import fuzzy
from pymongo import MongoClient

parser = argparse.ArgumentParser(description='Load names into the database')
parser.add_argument('name', nargs='+', help='names to load')
args = parser.parse_args()

c = MongoClient()
db = c.phonetic_search
dmetaphone = fuzzy.DMetaphone()
soundex = fuzzy.Soundex(4)

for n in args.name:
    # Compute the hashes. Save soundex
    # and nysiis as lists to be consistent
    # with dmetaphone return type.
    values = {'_id': n,
              'name': n,
              'soundex': [soundex(n)],
              'nysiis': [fuzzy.nysiis(n)],
              'dmetaphone': dmetaphone(n),
              }
    print(n, values['soundex'][0], values['nysiis'][0],
          values['dmetaphone'])
    # Upsert, so re-running the script updates existing entries.
    db.people.replace_one({'_id': n}, values, upsert=True)
```

Run mongodb_load.py from the command line to save names for retrieval later:

```
$ python mongodb_load.py Jonathan Johnathan Joshua Jessica

$ python mongodb_load.py Catherine Katherine Katarina
```

The search program mongodb_search.py lets the user select a hash function and then constructs a MongoDB query to find all names with a hash value matching the input name.

```
#!/usr/bin/env python

import argparse

import fuzzy
from pymongo import MongoClient

ENCODERS = {
    'soundex': fuzzy.Soundex(4),
    'nysiis': fuzzy.nysiis,
    'dmetaphone': fuzzy.DMetaphone(),
}

parser = argparse.ArgumentParser(description='Search for a name in the database')
parser.add_argument('algorithm', choices=sorted(ENCODERS.keys()),
                    help='hash algorithm to use')
parser.add_argument('name', help='name to search for')
args = parser.parse_args()

c = MongoClient()
db = c.phonetic_search

encoded_name = ENCODERS[args.algorithm](args.name)
query = {args.algorithm: encoded_name}

for person in db.people.find(query):
    print(person['name'])
```

In some of these sample cases, the extra values in the result set are desirable because they're valid matches. On the other hand, the Soundex search for Joshua returns the unrelated value Jessica again. Although Soundex produces poor results when compared to the other algorithms, it's still used in many cases because it's built into the database server. Its simplicity also means that it's faster than the NYSIIS or Double Metaphone. In situations where the results are good enough, its speed may be a deciding factor in selecting it.

```
$ python mongodb_search.py soundex Katherine
Katherine
Katarina

$ python mongodb_search.py nysiis Katherine
Catherine
Katherine
Katarina

$ python mongodb_search.py soundex Joshua
Joshua
Jessica

$ python mongodb_search.py nysiis Joshua
Joshua
```

Final Thoughts

I hope that this article has demonstrated the power that phonetic hash algorithms can add to the search features of your application, and the ease with which you can implement them. Selecting the right algorithm to use will depend on the nature of the data and the types of searches you're performing. If the right algorithm isn't clear from the data available, it may be best to provide an option to let users select an appropriate hash algorithm. Offering the user a choice will provide the most flexibility for experimentation and refining searches, although it does require a little more work on your part to set up the indexes. Many researchers, historians, and genealogists are familiar with the names of the algorithms, if not their implementations, so presenting them as options shouldn't intimidate these users.