Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.
About the Technology
Deep learning, a branch of artificial intelligence, teaches computers to learn by using neural networks, technology inspired by the human brain. Online text translation, self-driving cars, personalized product recommendations, and virtual voice assistants are just a few of the exciting modern advancements possible thanks to deep learning.
About the Book
Grokking Deep Learning teaches you to build deep learning neural networks from scratch! In his engaging style, seasoned deep learning expert Andrew Trask shows you the science under the hood, so you grok for yourself every detail of training neural networks. Using only Python and its math-supporting library, NumPy, you'll train your own neural networks to see and understand images, translate text into different languages, and even write like Shakespeare! When you're done, you'll be fully prepared to move on to mastering deep learning frameworks.
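The from-scratch approach the book describes, plain Python plus NumPy and nothing else, can be sketched in a few lines. The snippet below is an illustrative sketch in that spirit, not code from the book itself: a tiny two-layer network learns XOR using forward propagation, backpropagation, and gradient descent.

```python
import numpy as np

# A minimal sketch in the spirit of the book's approach (not code from
# the book itself): a tiny two-layer network, written with nothing but
# NumPy, learns XOR via forward propagation and gradient descent.

np.random.seed(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Small random weights: 2 inputs -> 8 hidden units -> 1 output.
W1 = 2 * np.random.random((2, 8)) - 1
W2 = 2 * np.random.random((8, 1)) - 1

for _ in range(20000):
    hidden = sigmoid(X @ W1)          # forward propagation
    output = sigmoid(hidden @ W2)
    # Backpropagation: push the output error back through the layers.
    output_delta = (y - output) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ output_delta     # gradient-descent weight updates
    W1 += X.T @ hidden_delta

print(np.round(output, 2))
```

The entire "framework" here is two matrix multiplications and the chain rule, which is exactly the level of detail the book has you work through before moving on to real deep learning frameworks.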
What's Inside
The science behind deep learning
Building and training your own neural networks
Privacy concepts, including federated learning
Tips for continuing your pursuit of deep learning
About the Reader
For readers with high-school-level math and intermediate programming skills.
About the Author
Andrew Trask is a PhD student at Oxford University and a research scientist at DeepMind. Previously, Andrew was a researcher and analytics product manager at Digital Reasoning, where he trained the world's largest artificial neural network and helped guide the analytics roadmap for the Synthesys cognitive computing platform.
Table of Contents
Introducing deep learning: why you should learn it
Fundamental concepts: how do machines learn?
Introduction to neural prediction: forward propagation
Introduction to neural learning: gradient descent
Learning multiple weights at a time: generalizing gradient descent
Building your first deep neural network: introduction to backpropagation
How to picture neural networks: in your head and on paper
Learning signal and ignoring noise: introduction to regularization and batching
Modeling probabilities and nonlinearities: activation functions
Neural learning about edges and corners: intro to convolutional neural networks
Neural networks that understand language: king - man + woman == ?
Neural networks that write like Shakespeare: recurrent layers for variable-length data
Introducing automatic optimization: let's build a deep learning framework
Learning to write like Shakespeare: long short-term memory
Deep learning on unseen data: introducing federated learning
Where to go from here: a brief guide