NVIDIA's Full-Color Guide to Deep Learning with TensorFlow: All You Need to Get Started and Get Results
Deep learning is a key component of today's exciting advances in machine learning and artificial intelligence. Learning Deep Learning is a complete guide to deep learning with TensorFlow, the #1 Python library for building these breakthrough applications. Illuminating both the core concepts and the hands-on programming techniques needed to succeed, this book is ideal for developers, data scientists, analysts, and others, including those with no prior machine learning or statistics experience.
After introducing the essential building blocks of deep neural networks, Magnus Ekman shows how to use fully connected feedforward networks and convolutional networks to solve real problems, such as predicting housing prices or classifying images. You'll learn how to represent words from a natural language, capture semantics, and develop a working natural language translator. With that foundation in place, Ekman then guides you through building a system that takes images as input and describes them in natural language.
Throughout, Ekman provides concise, well-annotated code examples using TensorFlow and the Keras API. (For comparison and easy migration between frameworks, complementary PyTorch examples are provided online.) He concludes by previewing trends in deep learning, exploring important ethical issues, and providing resources for further learning.
About the Author
Chapter 1: The Rosenblatt Perceptron
Chapter 2: Gradient-Based Learning
Chapter 3: Sigmoid Neurons and Backpropagation
Chapter 4: Fully Connected Networks Applied to Multiclass Classification
Chapter 5: Toward DL: Frameworks and Network Tweaks
Chapter 6: Fully Connected Networks Applied to Regression
Chapter 7: Convolutional Neural Networks Applied to Image Classification
Chapter 8: Deeper CNNs and Pretrained Models
Chapter 9: Predicting Time Sequences with Recurrent Neural Networks
Chapter 10: Long Short-Term Memory
Chapter 11: Text Autocompletion with LSTM and Beam Search
Chapter 12: Neural Language Models and Word Embeddings
Chapter 13: Word Embeddings from word2vec and GloVe
Chapter 14: Sequence-to-Sequence Networks and Natural Language Translation
Chapter 15: Attention and the Transformer
Chapter 16: One-to-Many Network for Image Captioning
Chapter 17: Medley of Additional Topics
Chapter 18: Summary and Next Steps
Appendix A: Linear Regression and Linear Classifiers
Appendix B: Object Detection and Segmentation
Appendix C: Word Embeddings Beyond word2vec and GloVe
Appendix D: GPT, BERT, and RoBERTa
Appendix E: Newton-Raphson versus Gradient Descent
Appendix F: Matrix Implementation of Digit Classification Network
Appendix G: Relating Convolutional Layers to Mathematical Convolution
Appendix H: Gated Recurrent Units
Appendix I: Setting Up a Development Environment
Appendix J: Cheat Sheets