
Using AI to Interact with User-Generated Content




The man who can drive himself further once the effort gets painful is the man who will win.

    —Roger Bannister

What do Russian trolls, Facebook, and U.S. elections have to do with ML? Recommendation engines are at the heart of the central feedback loop of social networks and the user-generated content (UGC) they create. Users join the network and are recommended users and content with which to engage. Recommendation engines can be gamed because they amplify the effects of thought bubbles. The 2016 U.S. presidential election showed how important it is to understand how recommendation engines work and the limitations and strengths they offer.

AI-based systems aren’t a panacea that only creates good things; rather, they offer a set of capabilities. It can be incredibly useful to get an appropriate product recommendation on a shopping site, but it can be equally frustrating to get recommended content that later turns out to be fake (perhaps generated by a foreign power motivated to sow discord in your country).

This chapter covers recommendation engines and natural-language processing (NLP), both from a high level and a coding level. It also gives examples of how to use frameworks, such as the Python-based recommendation engine Surprise, as well as instructions for how to build your own. Topics covered include the Netflix Prize, singular-value decomposition (SVD), collaborative filtering, real-world problems with recommendation engines, NLP, and production sentiment analysis using cloud APIs.

The Netflix Prize Wasn’t Implemented in Production

Before “data science” was a common term and Kaggle was around, the Netflix Prize took the world by storm. The Netflix Prize was a contest created to improve the recommendation of new movies. Many of the original ideas from the contest later inspired other companies and products. Creating a $1 million data science contest back in 2006 sparked excitement that would foreshadow the current age of AI. In 2006, ironically, the age of cloud computing also began, with the launch of Amazon EC2.

The cloud and the dawn of widespread AI have been intertwined, and Netflix has been one of the biggest users of the public cloud via AWS. Despite all these interesting historical footnotes, the Netflix prize-winning algorithm was never put into production. The winners in 2009, the “BellKor’s Pragmatic Chaos” team, achieved a greater than 10-percent improvement with a test RMSE of 0.8567 (https://netflixprize.com/index.html). The team’s paper (https://www.netflixprize.com/assets/ProgressPrize2008_BellKor.pdf) explains that the solution is a linear blend of over 100 results. A particularly relevant quote from the paper: “A lesson here is that having lots of models is useful for the incremental results needed to win competitions, but practically, excellent systems can be built with just a few well-selected models.”
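The idea of a linear blend can be shown in miniature. The sketch below uses made-up predictions from three hypothetical models (all numbers and weights are assumptions for illustration, not data from the contest) to show why a weighted average of models often scores a lower RMSE than any single model:

```python
import math

# Hypothetical predictions from three simple models for five ratings,
# plus the true ratings; every number here is invented for illustration.
truth   = [4.0, 3.0, 5.0, 2.0, 4.0]
model_a = [3.8, 3.5, 4.6, 2.5, 4.2]
model_b = [4.4, 2.5, 5.1, 1.8, 3.6]
model_c = [3.5, 3.2, 4.9, 2.2, 4.1]

def rmse(pred, actual):
    """Root-mean-square error between predictions and true ratings."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, actual)) / len(actual))

# A linear blend: a weighted average of the individual models' predictions.
weights = [0.4, 0.3, 0.3]
blend = [sum(w * m[i] for w, m in zip(weights, (model_a, model_b, model_c)))
         for i in range(len(truth))]

for name, pred in [("a", model_a), ("b", model_b), ("c", model_c), ("blend", blend)]:
    print(name, round(rmse(pred, truth), 3))
```

Because the individual models make partially uncorrelated errors, the blend's errors tend to cancel, which is the effect the winning team stacked more than 100 times over.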

The winning approach for the Netflix competition was not implemented in production at Netflix because the engineering complexity was deemed too great compared with the gains it produced. SVD, a core algorithm used in recommendations, is computationally expensive at scale; as noted in “Fast SVD for Large-Scale Matrices” (http://sysrun.haifa.il.ibm.com/hrl/bigml/files/Holmes.pdf), “though feasible for small datasets or offline processing, many modern applications involve real-time learning and/or massive dataset dimensionality and size.” In practice, this is one of the huge challenges of production ML: the time and computational resources necessary to produce results.
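To make the SVD discussion concrete, here is a minimal sketch of a rank-k approximation of a toy user-by-item ratings matrix with NumPy. This is not Netflix's production method, and libraries such as Surprise use gradient-based factorization variants rather than a full decomposition; the ratings matrix below is invented for illustration.

```python
import numpy as np

# Toy user-by-item ratings matrix; 0 means "unrated" (an assumption of this sketch).
R = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [1.0, 0.0, 5.0, 4.0],
    [0.0, 1.0, 4.0, 5.0],
])

# Full SVD, then keep only the top-k singular values (rank-k approximation).
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# R_hat holds predicted scores, including for the unrated (zero) cells.
print(np.round(R_hat, 2))
```

Even on this 4x4 matrix the full decomposition touches every entry; on a matrix with millions of users and items, that cost is exactly the real-time feasibility problem the quoted paper describes.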

I had a similar experience building recommendation engines at several companies. When an algorithm is simple and run in a batch manner, it can generate useful recommendations. But if a more complex approach is taken, or if the requirements move from batch to real time, the complexity of putting it into production and/or maintaining it explodes. The lesson here is that simpler is better: choose batch-based ML over real-time, choose a simple model over an ensemble of multiple techniques, and consider whether calling a recommendation engine API makes more sense than building the solution yourself.
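As an example of how far "simpler is better" can go, a batch item-similarity recommender fits in a few dozen lines of plain Python: precompute cosine similarity between item rating vectors, then score a user's unseen items against what they already rated. The users, items, and ratings below are hypothetical, and a real batch job would persist the similarity scores rather than recompute them per request.

```python
import math

# Hypothetical batch snapshot of ratings: user -> {item: rating}.
ratings = {
    "alice": {"matrix": 5, "inception": 4, "up": 1},
    "bob":   {"matrix": 4, "inception": 5},
    "carol": {"up": 5, "coco": 4, "matrix": 1},
    "dave":  {"coco": 5, "up": 4},
}

items = sorted({item for user_ratings in ratings.values() for item in user_ratings})

def item_vector(item):
    """One item's ratings across all users (0 when unrated)."""
    return [ratings[user].get(item, 0) for user in sorted(ratings)]

def cosine(a, b):
    """Cosine similarity between two rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(user, n=1):
    """Score each unseen item by its similarity to the items the user rated."""
    seen = ratings[user]
    scores = {}
    for candidate in items:
        if candidate in seen:
            continue
        scores[candidate] = sum(
            rating * cosine(item_vector(candidate), item_vector(liked))
            for liked, rating in seen.items()
        )
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(recommend("bob"))
```

Run nightly over a ratings snapshot, something this small is trivial to operate; the maintenance burden only explodes once the same logic must respond to each click in real time.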
