Introduction to Mining the Talk: Unlocking the Business Value in Unstructured Information
People are talking about your business every day. Are you listening?
Your customers are talking. They're talking about you to your face and behind your back. They're saying how much they like you, and how much they hate you. They're describing what they wish you would do for them, and what the competition is already doing for them. They are writing emails to you, posting blogs about you, and discussing you endlessly in public forums. Are you listening?
Other businesses and organizations are talking too. Researchers talk about new technologies and approaches you might be interested in. Other businesses describe innovations you could leverage in your products. Your competitors are revealing technical approaches and broadcasting their strategies in various publications. They talk about what they are working on and what they think is important. Are you listening?
Your employees are also talking. They are producing great ideas that are languishing for lack of the right context to apply them. They are looking for the right partners to help them innovate and create the next big thing for your company. They reveal new ways to improve your internal processes and even change the entire vision for your company. Are you listening?
All of this talk is going on out there now, even as you read these pages. And you can listen—if you know how. This book is about how we learned to listen to the talk and to turn it into valuable business insights for our company and for our customers. Now we would like to share that knowledge with you.
A Short Story..."The Contest"
Writing this book has been a project that beckoned for many years. We had started and stopped multiple times. We knew we wanted to write the book, but we had trouble convincing ourselves that anyone would want to read it. At a gut level, we knew that what we were doing was important and unique. However, there were a lot of competing methods and products, with more added every day, and we could not spend all of our time evaluating each of them to determine if our approach was measurably superior. Then, in May 2006, an event occurred that, in a single day, demonstrated convincingly that our approach was significantly better than the alternatives in our field. The results of that day energized us to go ahead and complete this book.
It began when a potential client was considering a large unstructured data mining project. Like most companies, they had a huge collection of documents describing customer interactions. They wanted to automatically classify these documents to route them to the correct business process. They questioned whether this was even feasible and, if so, how expensive it would be. Rather than invite all the vendors in this space to present proposals, they wanted to understand how effective each technical approach was on their data. To this end, they set up the following "contest."
They took a sample of 5,000 documents that had been scanned and converted to text and divided them manually into 50 categories of around 100 documents each. They then invited seven of the leading vendors with products in this space to spend one week with the data, using whatever tools and techniques they wished, to model these 50 categories. When they were done, they would be asked to classify another, unseen set of 25,000 documents. The vendors' products would be compared on speed, accuracy of classification, and ease of use during training. The results would be shared with all concerned.
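For readers who want a concrete picture of the exercise, the sketch below shows the general shape of such a train-then-classify protocol in Python, using generic open-source tooling (scikit-learn) rather than any participant's product. The feature representation and classifier chosen here (TF-IDF plus a linear model) are illustrative assumptions of this example, not a description of what any vendor in the contest actually ran.

```python
# Illustrative sketch of the contest protocol (not any vendor's actual system):
# train on ~5,000 labeled documents in 50 categories, then classify ~25,000
# unseen documents and report elapsed time and accuracy.
import time

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline


def run_contest(train_texts, train_labels, test_texts, test_labels):
    """Train on the labeled sample, then classify the held-out document set."""
    model = make_pipeline(
        TfidfVectorizer(sublinear_tf=True, min_df=2),  # bag-of-words features
        LogisticRegression(max_iter=1000),             # a simple linear classifier
    )
    model.fit(train_texts, train_labels)

    start = time.perf_counter()
    predictions = model.predict(test_texts)
    elapsed = time.perf_counter() - start

    print(f"Classified {len(test_texts)} documents in {elapsed:.1f} seconds")
    print(f"Accuracy: {accuracy_score(test_labels, predictions):.3f}")
    return predictions
```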
That was it. The "contest" had no prize. There was no promise of anything more on the client's part after it was over. No money would change hands. Nothing would be published about the incident. There was no guarantee that anything would come of it. I was dead set against participating in this activity for three very good reasons: 1) I thought that the chances it would lead to eventual business were small; 2) I didn't think the problem they were proposing was well formed since we would have no chance to talk to them up front to identify business objectives, and from these to design a set of categories that truly reflected the needs of the business as well as the actual state of the data; and 3) I was already scheduled to be in London that week working with a paying customer.
I explained all of these reasons to Jeff, and he listened patiently and said, "You could get back a day early from London and be there on Friday."
"So I would have one day while the other vendors had five! No way!"
"You won't need more than one day. You'll do it in half a day." I didn't respond to that—I recognize rank flattery when I hear it. Then Jeff said, "I guess you really don't want to do this."
That stopped me a moment. The truth was I did want to do it. I had always been curious to know how our methods stacked up against the competition in an unbiased comparison, and here was an opportunity to find out. "OK. I'll go," I found myself saying.
As planned, I arrived at the designated testing location on Friday morning at 9AM. A representative of the client showed me to an empty cubicle where sat a PC containing the training data sample. On the way, he asked whether I would want to work late into the day (this was the Friday before Memorial Day weekend). I assured him that this would not be the case. He showed me where on the hard drive the data was located and then left. I installed our software on the PC and got to work.
About an hour later, he stopped by to see how I was coming along. "Well, I'm essentially done modeling your data," I said. He laughed, assuming I was making a joke. "No, seriously, take a look." We spent about an hour perusing his data in the tool. I showed him the strengths and weaknesses of the classification scheme they had set up, pointing out exactly which categories were well defined and which were not, and identifying outliers in the training set that might have a negative influence on classifier performance. He was quite impressed.
"So, can you classify the test set now?" he asked.
"Sure, I'll go ahead and start that up." I kicked off the process that classified the 25,000 test documents based on the model generated from the training set categories.
We watched it run together for a few seconds. Then he asked me how long it would take. I tried to calculate in my head how long it should take based on the type of model I was using and the size of the document collection. I hesitated just long enough before answering. Before I could give my best guess, the classification had completed. It took about one minute.
"So that's it? You're done?" he asked, clearly bemused.
"Yes. We can try some other classification models to see if they do any better, but I think this will probably be the best we can come up with. You seem surprised."
He lowered his voice to barely a whisper. "I shouldn't be telling you this, but most of the other vendors are still here, and some of them still haven't come up with a result. None of them finished in less than three days. You did it all in less than two hours? Is your software really that much better than theirs? How does your accuracy stack up?"
"I don't know for sure," I answered truthfully, "but based on the noise I see in your training set, and the accuracy levels our models predict, I doubt they will do any better than we just did." (Two weeks later, when the results were tabulated for all vendors, our accuracy rate was almost exactly as predicted, and it turned out to be better than any of the other participating vendors.)
"So why is your stuff so much better than theirs?" he asked.
"That's not an easy question to answer. Let's go to lunch, and I'll tell you about it."
What I told the client over lunch is the story of how and why our methodology evolved and what made it unique. I explained to him how every other unstructured mining approach on the market was based on the idea that "the best algorithm wins." In other words, researchers had picked a few sets of "representative" text data, often items culled from news articles or research abstracts, and then each created their own approach to classifying those articles as accurately as possible. They honed the approaches against each other and tuned them to perform with optimum speed and accuracy on one type of unstructured data. Then these algorithms eventually became products, turned loose on a world that looked nothing like the lab environment in which they had been optimized to succeed.
Our approach was very different. It assumed very little about the kind of unstructured data that would be given as input. It also didn't assume any one "correct" classification scheme, but observed that the classification of these documents might vary depending on the business context. These assumptions about the vast variability inherent in both business data and classification schemes for that data led us to an approach that was orders of magnitude more flexible and generic than anything else available on the market. It was this flexibility and adaptability that allowed me to go into a new situation and, without ever having seen the data or the classification scheme ahead of time, quickly model the key aspects of the domain and produce an automated classifier of high accuracy and performance.
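To make that flexibility a little more concrete: the essential move is to build a model of whatever categories the business supplies, estimate its accuracy from the training sample alone, and report back which categories hold together and which do not. The sketch below illustrates that general idea with generic open-source tooling (a TF-IDF centroid model in scikit-learn). It is an illustration of the approach in spirit, not our software; the cohesion measure, cross-validation settings, and function names here are assumptions of this example.

```python
# A data-agnostic sketch: model whatever categories the business provides,
# predict accuracy before seeing any test data, and flag weak categories and
# outlier training examples. Illustrative only; generic scikit-learn, not the
# authors' tool.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestCentroid
from sklearn.pipeline import make_pipeline


def profile_categories(texts, labels, cv=5):
    """Fit a centroid-per-category model and report how well each category holds together."""
    labels = np.asarray(labels)
    pipeline = make_pipeline(TfidfVectorizer(min_df=2), NearestCentroid())

    # Expected accuracy on unseen documents, estimated from the training sample alone.
    expected = cross_val_score(pipeline, texts, labels, cv=cv).mean()
    print(f"Predicted accuracy: {expected:.2f}")

    # Cohesion: how similar each category's documents are to their own centroid.
    # Low cohesion suggests a vaguely defined category; its least similar
    # documents are candidate outliers to review with the client.
    vectors = TfidfVectorizer(min_df=2).fit_transform(texts)
    centroids = NearestCentroid().fit(vectors, labels)
    for name, centroid in zip(centroids.classes_, centroids.centroids_):
        members = vectors[labels == name]
        similarity = cosine_similarity(members, centroid.reshape(1, -1)).ravel()
        print(f"{name}: cohesion={similarity.mean():.2f}, "
              f"weakest example #{int(similarity.argmin())}")

    return pipeline.fit(texts, labels)  # ready to classify new documents
```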