5+ Hours of Video Instruction
An intuitive introduction to processing natural language data with Deep Learning models. Deep Learning for Natural Language Processing LiveLessons brings intuitive explanations of essential theory to life with interactive, hands-on Jupyter notebook demos. Examples feature Python and Keras, the high-level API for TensorFlow, the most popular Deep Learning library. The early lessons cover the specifics of working with natural language data, including how to convert natural language into numerical representations that can be readily processed by machine learning approaches. The later lessons leverage state-of-the-art Deep Learning architectures to make predictions with natural language data.
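To give a feel for what "converting natural language into numerical representations" involves, here is a minimal, self-contained sketch (not the course's notebook code): each unique token is mapped to an integer index, and sentences become fixed-length sequences of those indices. The corpus, function names, and padding convention are illustrative assumptions.

```python
# A minimal sketch of turning raw text into integer sequences, the kind of
# numerical representation a neural network can consume.

def build_vocab(sentences):
    """Map each unique token to an integer index (0 is reserved for padding)."""
    vocab = {"<pad>": 0}
    for sentence in sentences:
        for token in sentence.lower().split():
            vocab.setdefault(token, len(vocab))
    return vocab

def encode(sentence, vocab, max_len):
    """Convert a sentence to a fixed-length list of token indices.
    Unknown tokens fall back to index 0 in this toy version."""
    ids = [vocab.get(t, 0) for t in sentence.lower().split()]
    return (ids + [0] * max_len)[:max_len]  # pad or truncate to max_len

corpus = ["deep learning processes natural language", "language is a sequence"]
vocab = build_vocab(corpus)
encoded = [encode(s, vocab, max_len=6) for s in corpus]
print(encoded[0])  # → [1, 2, 3, 4, 5, 0]
```

In a Keras workflow these integer sequences would typically feed an embedding layer, which is where the word vectors of Lesson 2 come in.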
Learn How To
Who Should Take This Course
These LiveLessons are perfectly suited to software engineers, data scientists, analysts, and statisticians with an interest in applying Deep Learning to natural language data. Code examples are provided in Python, so familiarity with Python or another object-oriented programming language is helpful.
The author’s earlier Deep Learning with TensorFlow LiveLessons, or equivalent foundational Deep Learning knowledge, is a prerequisite.
Table of Contents
Lesson 1: The Power and Elegance of Deep Learning for Natural Language Processing
This lesson starts off by examining Natural Language Processing and how it has been revolutionized in recent years by Deep Learning approaches. It continues with a brief linguistics section that introduces the elements of natural language and breaks down how those elements are represented both by Deep Learning and by traditional machine learning approaches. This is followed by a tantalizing overview of the broad range of natural language applications in which Deep Learning has emerged as the state of the art. The lesson then reviews how to run the code in these LiveLessons on your own machine, as well as the foundational Deep Learning theory essential for building an NLP specialization. It wraps up with a sneak peek at the capabilities you’ll develop over the course of all five lessons.
Lesson 2: Word Vectors
The lesson begins by illustrating what word vectors are, as well as how the beautiful word2vec algorithm creates them. Subsequently, the lesson arms you with a rich set of natural language data sets on which you can train powerful Deep Learning models, and then swiftly moves along to leveraging those data to generate word vectors of your own.
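A core idea behind word2vec's skip-gram variant is how it frames its training data: each word is paired with the words that appear within a small context window around it. The sketch below shows only that pair-generation step, in plain Python; the example sentence, function name, and window size are illustrative, and the actual vector training (done in the lesson with a real library) is omitted.

```python
# Generate (target, context) training pairs the way skip-gram word2vec
# frames them: every word paired with its neighbors within a window.

def skipgram_pairs(tokens, window=2):
    """Yield (target, context) pairs for every token in the sequence."""
    pairs = []
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # a word is not its own context
                pairs.append((target, tokens[j]))
    return pairs

tokens = "the quick brown fox".split()
pairs = skipgram_pairs(tokens, window=1)
print(pairs)
# → [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'),
#    ('brown', 'quick'), ('brown', 'fox'), ('fox', 'brown')]
```

Training a shallow network to predict context words from targets (or vice versa, in the CBOW variant) is what yields the dense word vectors themselves.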
Lesson 3: Modeling Natural Language Data
In the previous lesson, you learned about vector-space embeddings and created word vectors with word2vec. In that process, we identified shortcomings of our natural language data, so the current lesson begins with coverage of Natural Language Processing best practices. Next, on the whiteboard, the author works through how to calculate a concise and broadly useful summary metric: the Area Under the Curve of the Receiver Operating Characteristic. You immediately put that summary metric into practice by building and evaluating a dense neural network for classifying documents. The lesson then goes a step further by showing you how to add convolutional layers to your deep neural network as well.
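The ROC AUC metric covered on the whiteboard has a handy probabilistic interpretation: it is the probability that a randomly chosen positive example is scored above a randomly chosen negative one (with ties counting as half). The snippet below, a toy illustration rather than the course's notebook code, computes AUC directly from that definition; in practice a library routine such as scikit-learn's would be used.

```python
# ROC AUC via its probabilistic interpretation: the chance that a randomly
# chosen positive example outranks a randomly chosen negative one.

def roc_auc(labels, scores):
    """labels: 1 for positive, 0 for negative; scores: model outputs."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count each positive/negative pairing: a win if the positive is
    # scored higher, half a win on a tie.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1]
scores = [0.9, 0.7, 0.4, 0.6, 0.3]
print(roc_auc(labels, scores))  # → 0.666…: 4 of the 6 pairs rank correctly
```

A perfect classifier scores 1.0, and a coin-flip classifier hovers around 0.5, which is what makes the metric such a concise summary of ranking quality.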
Lesson 4: Recurrent Neural Networks
This lesson kicks off by delving into the essential theory of Recurrent Neural Networks, a Deep Learning family ideally suited to handling data that occur in sequences, such as natural language. You immediately apply this theory by incorporating an RNN into your document classification model. The author then briefly returns to the whiteboard to provide a high-level theoretical overview of two especially powerful RNN variants, the Long Short-Term Memory unit and the Gated Recurrent Unit, before incorporating these into your Deep Learning models as well.
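To make the gate arithmetic behind these variants concrete, here is a deliberately tiny sketch of a single GRU update with scalar input and hidden state (biases omitted, weights chosen arbitrarily). It is an illustration of the update/reset-gate mechanics only, not how the course builds GRUs, which is done with Keras layers.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    """One GRU update for scalar input x and scalar hidden state h.
    w holds the six scalar weights; biases are omitted for brevity."""
    z = sigmoid(w["wz"] * x + w["uz"] * h)             # update gate: how much to rewrite
    r = sigmoid(w["wr"] * x + w["ur"] * h)             # reset gate: how much history to use
    h_cand = math.tanh(w["wh"] * x + w["uh"] * r * h)  # candidate new state
    return (1.0 - z) * h + z * h_cand                  # blend old state with candidate

# Arbitrary illustrative weights; a real model learns these.
weights = {"wz": 0.5, "uz": 0.5, "wr": 0.5, "ur": 0.5, "wh": 1.0, "uh": 1.0}
h = 0.0
for x in [1.0, -1.0, 0.5]:  # run the cell over a short input sequence
    h = gru_step(x, h, weights)
print(h)  # final hidden state summarizes the whole sequence
```

Because tanh bounds the candidate state and the gates blend convexly, the hidden state stays in (-1, 1) while carrying information forward across time steps; the LSTM achieves something similar with an additional cell state and one more gate.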
Lesson 5: Advanced Models
This lesson expands our natural language modeling capabilities further by examining special cases of the LSTM, namely the Bi-Directional and Stacked varieties. We then take a mind-bending journey into the world of non-sequential network architectures: instead of only stacking neural layers on top of each other, as we’ve always done, we also run layers side by side in parallel. To wrap up these LiveLessons, the author summarizes the hyperparameters we can consider tuning to optimize model performance.
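As a flavor of what hyperparameter tuning can look like in its simplest form, here is a toy grid search in plain Python. The grid, parameter names, and scoring function are all illustrative stand-ins; in the course, each configuration would be scored by actually training and validating a model.

```python
from itertools import product

# Toy hyperparameter grid search: try every combination and keep the best.
grid = {
    "dropout": [0.2, 0.5],
    "n_units": [64, 256],
    "learning_rate": [1e-2, 1e-3],
}

def validation_score(config):
    """Stand-in for 'train a model with this config, return validation AUC'.
    A real score would come from a held-out data set, not a formula."""
    return 0.9 - config["dropout"] * 0.1 + (config["n_units"] == 256) * 0.02

# Enumerate the Cartesian product of all hyperparameter values.
best = max(
    (dict(zip(grid, values)) for values in product(*grid.values())),
    key=validation_score,
)
print(best)  # the highest-scoring configuration in the grid
```

Exhaustive grids grow multiplicatively with each hyperparameter, which is why random or adaptive search is often preferred once the search space gets large.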
About Pearson Video Training
Pearson publishes expert-led video tutorials covering a wide selection of technology topics designed to teach you the skills you need to succeed. These professional and personal technology videos feature world-leading author instructors published by your trusted technology brands: Addison-Wesley, Cisco Press, Pearson IT Certification, Prentice Hall, Sams, and Que. Topics include: IT Certification, Network Security, Cisco Technology, Programming, Web Development, Mobile Development, and more. Learn more about Pearson Video training at http://www.informit.com/video.
Lesson 1: The Power and Elegance of Deep Learning for NLP
1.1 Introduction to Deep Learning for Natural Language Processing
1.2 Computational Representations of Natural Language Elements
1.3 NLP Applications
1.4 Installation, Including GPU Considerations
1.5 Review of Prerequisite Deep Learning Theory
1.6 A Sneak Peek
Lesson 2: Word Vectors
2.1 Vector-Space Embedding
2.3 Data Sets for NLP
2.4 Creating Word Vectors with word2vec
Lesson 3: Modeling Natural Language Data
3.1 Best Practices for Preprocessing Natural Language Data
3.2 Dense Neural Network Classification
3.4 Convolutional Neural Network Classification
3.5 Multi-ConvNet Architectures
Lesson 4: Recurrent Neural Networks
4.1 Essential Theory of RNNs
4.2 RNNs in Practice
4.3 Essential Theory of LSTMs and GRUs
4.4 LSTMs and GRUs in Practice
Lesson 5: Advanced Models
5.1 Bi-Directional LSTMs
5.2 Stacked LSTMs
5.3 Parallel Network Architectures
5.4 Hyperparameter Tuning