Today we are excited to announce the launch of Deep Learning with R, 2nd Edition. Compared to the first edition, the book is over a third longer, with more than 75% new content. It’s not so much an updated edition as an entirely new book.
This book will show you how to get started with deep learning in R, even if you have no background in math or data science. The book covers:
- Deep learning from first principles
- Image classification and segmentation
- Time series forecasting
- Text classification and machine translation
- Text generation, neural style transfer, and image generation
Only a modest knowledge of R is assumed; everything else is explained from the ground up, with examples that plainly demonstrate the mechanics. Learn about gradients and backpropagation – by using tf$GradientTape() to rediscover Earth’s constant gravitational acceleration (9.8 \(m/s^2\)). Learn what a Keras Layer is – by implementing one from scratch using only base R. Learn the difference between batch normalization and layer normalization, what layer_lstm() does, what happens when you call fit(), and so on—all through implementations in plain R code.
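To give a flavor of that exercise, here is a minimal sketch (not the book’s code; the simulated data, variable names, and learning rate are illustrative assumptions) that recovers the acceleration constant from simulated free-fall measurements with tf$GradientTape() and plain gradient descent:

```r
library(tensorflow)

# Simulated free-fall measurements: distance fallen after t seconds,
# generated from the true value g = 9.8 (toy data, assumed for illustration)
t_obs <- tf$constant(seq(0.1, 2, by = 0.1))
y_obs <- 0.5 * 9.8 * t_obs ^ 2

g <- tf$Variable(1)  # initial guess for the acceleration

for (step in 1:200) {
  # Record the forward computation on the tape so it can be differentiated
  with(tf$GradientTape() %as% tape, {
    y_pred <- 0.5 * g * t_obs ^ 2  # free-fall model
    loss <- tf$reduce_mean((y_pred - y_obs) ^ 2)
  })
  grad <- tape$gradient(loss, g)  # d(loss)/d(g)
  g$assign_sub(0.1 * grad)        # plain gradient descent step
}

g  # converges to ~9.8
```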
Every section in the book has received significant updates. The chapters on computer vision provide a complete guide to approaching image segmentation tasks. The image classification sections have been updated to use {tfdatasets} and Keras preprocessing layers, showing not only how to build a fast and efficient data pipeline, but also how to customize it when your dataset requires it.
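As a taste of what such a pipeline can look like, here is a small sketch (the directory layout, image size, and augmentation parameters are assumptions for illustration, not the book’s code):

```r
library(keras)
library(tfdatasets)
library(tensorflow)

# Assumes images are stored in class-named subdirectories of "images/train"
train_ds <- image_dataset_from_directory(
  "images/train",
  image_size = c(180, 180),
  batch_size = 32
)

# Data augmentation expressed as Keras preprocessing layers
augmenter <- keras_model_sequential() %>%
  layer_random_flip("horizontal") %>%
  layer_random_rotation(factor = 0.1)

train_ds <- train_ds %>%
  dataset_map(function(images, labels) {
    # training = TRUE ensures the random augmentations are applied
    list(augmenter(images, training = TRUE), labels)
  }) %>%
  dataset_prefetch(buffer_size = tf$data$AUTOTUNE)
```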
The chapters on text models have been completely reworked. Learn how to preprocess raw text for deep learning by first implementing a text-vectorization layer using only base R, before using keras::layer_text_vectorization() in nine different ways. Learn about embedding layers by implementing your own layer_positional_embedding(). Learn about the transformer architecture by implementing your own layer_transformer_encoder() and layer_transformer_decoder(). And along the way, put it all together by training text models—first a movie-review sentiment classifier, then an English-to-Spanish translator, and finally a movie-review text generator.
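As one small example of layer_text_vectorization() in action, here is a minimal sketch on a toy corpus (the vocabulary size and sequence length are illustrative assumptions):

```r
library(keras)

# A toy corpus; in the book this role is played by IMDB movie reviews
texts <- c("This movie was excellent", "A dull, forgettable film")

vectorizer <- layer_text_vectorization(
  max_tokens = 10000,         # cap on vocabulary size
  output_mode = "int",        # one integer token id per word
  output_sequence_length = 8  # pad or truncate to a fixed length
)

# adapt() builds the vocabulary from the data
adapt(vectorizer, texts)

vectorizer(texts)  # a (2, 8) integer tensor, zero-padded
```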
Generative models have their own chapter, covering not only text generation, but also variational autoencoders (VAEs), generative adversarial networks (GANs), and style transfer.
Every step of the way, you’ll find sprinkled insights gleaned from experience and empirical observation about what works, what doesn’t, and why. Answers to questions like: when should you use a bag of words instead of a sequential architecture? When is it better to use a pretrained model instead of training a model from scratch? When should you use GRU instead of LSTM? When is it better to use separable convolution instead of regular convolution? When training is unstable, what troubleshooting steps should you take? What can you do to make training faster?
The book eschews magic and hand-waving and instead pulls back the curtain on all the necessary foundational concepts needed to apply deep learning. After studying the material in the book, you will not only know how to apply deep learning to common tasks, but you will also have the context to apply deep learning to new domains and new problems.
Deep Learning with R, Second Edition
Reuse
Text and images are licensed under Creative Commons Attribution CC BY 4.0. Figures that have been reused from other sources are not covered by this license and can be identified by the note in their caption: “Image from …”.
Citation
For attribution, please cite this work as
Kalinowski (2022, May 31). Posit AI Blog: Deep Learning with R, 2nd Edition. Retrieved from https://blogs.rstudio.com/tensorflow/posts/2022-05-31-deep-learning-with-R-2e/
BibTeX citation
@misc{kalinowskiDLwR2e,
  author = {Kalinowski, Tomasz},
  title = {Posit AI Blog: Deep Learning with R, 2nd Edition},
  url = {https://blogs.rstudio.com/tensorflow/posts/2022-05-31-deep-learning-with-R-2e/},
  year = {2022}
}