lazyprogrammer PDF: 1 to 10 of 14 results fetched - page 1

Artificial Intelligence: Reinforcement Learning in Python: Complete guide to artificial intelligence and machine learning, prep for deep reinforcement learning

https://www.amazon.com/Artificial-Intelligence-Reinforcement...
When people talk about artificial intelligence, they usually don’t mean supervised and unsupervised machine learning. These tasks are pretty trivial compared to what we think of AIs doing: playing chess and Go, driving cars, and beating video games at a superhuman level. Reinforcement learning has recently become popular for doing all of that and more.

Much like deep learning, a lot of the theory was discovered in the 70s and 80s, but it hasn’t been until recently that we’ve been able to observe first hand the amazing results that are possible. In 2016 we saw AlphaGo beat the world champion in Go. We saw AIs playing video games like Doom and Super Mario. Self-driving cars have started driving on real roads with other drivers and even carrying passengers, all without human assistance. If that sounds amazing, brace yourself for the future, because the law of accelerating returns dictates that this progress is only going to continue to increase exponentially.

Yet learning about supervised and unsupervised machine learning is no small feat. To date I have over 16 courses just on those topics alone. And still, reinforcement learning opens up a whole new world. As you’ll learn in this book, the reinforcement learning paradigm is more different from supervised and unsupervised learning than they are from each other. It has led to new and amazing insights in both behavioral psychology and neuroscience. As you’ll learn in this course, there are many analogous processes when it comes to teaching an agent and teaching an animal or even a human. It’s the closest thing we have so far to a true general artificial intelligence.

What’s covered in this course?
  • The multi-armed bandit problem and the explore-exploit dilemma
  • Ways to calculate means and moving averages and their relationship to stochastic gradient descent
  • Markov Decision Processes (MDPs)
  • Dynamic Programming
  • Monte Carlo
  • Temporal Difference (TD) Learning
  • Approximation Methods (i.e. how to plug a deep neural network or other differentiable model into your RL algorithm)

If you’re ready to take on a brand new challenge and learn about AI techniques that you’ve never seen before in traditional supervised machine learning, unsupervised machine learning, or even deep learning, then this course is for you.

HARD PREREQUISITES / KNOWLEDGE YOU ARE ASSUMED TO HAVE:
  • Calculus
  • Probability
  • Object-oriented programming
  • Python coding: if/else, loops, lists, dicts, sets
  • Numpy coding: matrix and vector operations
  • Linear regression
  • Gradient descent
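To give a flavor of the first two topics in the list above, here is a minimal epsilon-greedy bandit sketch of my own (not code from the book): the incremental mean update it uses is exactly the moving-average / stochastic-gradient-style update the blurb alludes to.

# Epsilon-greedy multi-armed bandit, illustrative only (not the book's code).
# The update Q[a] += (reward - Q[a]) / counts[a] is the incremental sample mean;
# replacing 1/counts[a] with a fixed step size gives the SGD-style moving average.
import numpy as np

def run_bandit(true_means, n_steps=10000, epsilon=0.1, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_means)
    Q = np.zeros(k)          # estimated value of each arm
    counts = np.zeros(k)     # how many times each arm was pulled
    for _ in range(n_steps):
        if rng.random() < epsilon:           # explore
            a = int(rng.integers(k))
        else:                                # exploit
            a = int(np.argmax(Q))
        reward = rng.normal(true_means[a], 1.0)
        counts[a] += 1
        Q[a] += (reward - Q[a]) / counts[a]  # incremental mean update
    return Q

print(run_bandit([0.1, 0.5, 0.9]))  # estimates should approach the true means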
Publication date: 03/02/2017
Kindle book details: Kindle Edition, 212 pages

Deep Learning: Natural Language Processing in Python with Word2Vec: Word2Vec and Word Embeddings in Python and Theano (Deep Learning and Natural Language Processing Book 1)

https://www.amazon.com/Deep-Learning-Language-Processing-Emb...
Word2Vec

Word2Vec is a set of neural network algorithms that have gotten a lot of attention in recent years as part of the re-emergence of deep learning in AI.

The idea that one can represent words and concepts as vectors is not new. The ability to do it effectively and generate noteworthy results is. Word2Vec algorithms are especially interesting because they allow us to perform arithmetic on the word vectors that yields both surprising and satisfying results. We call these “word analogies”.

Some popular word analogies Word2Vec is capable of finding:
  • “King” is to “Man” as “Queen” is to “Woman”.
  • “France” is to “Paris” as “Italy” is to “Rome”.
  • “December” is to “November” as “July” is to “June”.

Not only can we cluster similar words together, we can make all these clusters have the same “structure”, all by using Word2Vec.

Word2Vec was created by a team led by Tomas Mikolov at Google and has many advantages over earlier algorithms that attempt to do similar things, like Latent Semantic Analysis (LSA) or Latent Semantic Indexing (LSI).

In this book we cover various popular flavors of the Word2Vec algorithm, including CBOW (continuous bag-of-words), skip-gram, and negative sampling. I show you both their derivations in math (you’ll see that if you are already familiar with deep learning concepts, there is no new math to be learned) and how to implement them in code. Whereas implementation in Numpy is just the straightforward application of the equations in code, Theano is a bit more complex because it requires new array-slicing techniques, namely running gradient descent on only a part of a matrix. It’s not straightforward, but I walk you through all the bits and pieces required to understand the full implementation.

Amazingly, all the technologies we discuss in this book can be downloaded and installed for FREE. That means all you need to invest after purchasing this book is your effort and your time. The only prerequisites are that you are comfortable with Python, Numpy, and Theano coding and you know the basics of deep learning.

“Hold up... what’s deep learning and all this other crazy stuff you’re talking about?”

If you are completely new to deep learning, you might want to check out my earlier books and courses on the subject, since they are required in order to understand this book:
  • Deep Learning in Python: https://www.udemy.com/data-science-deep-learning-in-python
  • Deep Learning in Python Prerequisites: https://www.udemy.com/data-science-logistic-regression-in-python

Much like how IBM’s Deep Blue beat world champion chess player Garry Kasparov in 1997, Google’s AlphaGo recently made headlines when it beat world champion Lee Sedol in March 2016. What was amazing about this win was that experts in the field didn’t think it would happen for another 10 years. The search space of Go is much larger than that of chess, meaning that existing techniques for playing games with artificial intelligence were infeasible. Deep learning was the technique that enabled AlphaGo to correctly predict the outcome of its moves and defeat the world champion. Deep learning progress has accelerated in recent years due to more processing power (see: Tensor Processing Unit or TPU), larger datasets, and new algorithms like the ones discussed in this book.

Book 2 in the series can be found at: https://www.amazon.com/dp/B01KRBOO4Y
Book 3 in the series can be found at: https://www.amazon.com/dp/B01KS5AEXO
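As a small illustration of the “word analogy” arithmetic described above, here is a Numpy sketch of my own (not the book's code). It uses random stand-in vectors so it runs end to end; with real trained Word2Vec embeddings the analogy query would return “queen”.

# Word analogy by vector arithmetic: find the word closest to king - man + woman.
# The word vectors here are random placeholders for trained embeddings.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["king", "queen", "man", "woman", "paris", "france"]
word_vectors = {w: rng.normal(size=50) for w in vocab}   # stand-in for trained embeddings

def analogy(a, b, c, word_vectors):
    # Return the word whose vector is closest (by cosine similarity) to vec(b) - vec(a) + vec(c).
    target = word_vectors[b] - word_vectors[a] + word_vectors[c]
    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = [w for w in word_vectors if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(word_vectors[w], target))

print(analogy("man", "king", "woman", word_vectors))   # "queen" with real embeddings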
Publication date: 08/19/2016
Kindle book details: Kindle Edition, 47 pages

Unsupervised Machine Learning in Python: Master Data Science and Machine Learning with Cluster Analysis, Gaussian Mixture Models, and Principal Components Analysis

https://www.amazon.com/Unsupervised-Machine-Learning-Python-...
In a real-world environment, you can imagine that a robot or an artificial intelligence won’t always have access to the optimal answer, or maybe there isn’t an optimal correct answer. You’d want that robot to be able to explore the world on its own, and learn things just by looking for patterns.

Think about the large amounts of data being collected today, by the likes of the NSA, Google, and other organizations. No human could possibly sift through all that data manually. It was reported recently in the Washington Post and Wall Street Journal that the National Security Agency collects so much surveillance data, it is no longer effective. Could automated pattern discovery solve this problem?

Do you ever wonder how we get the data that we use in our supervised machine learning algorithms? Kaggle always seems to provide us with a nice CSV, complete with Xs and corresponding Ys. If you haven’t been involved in acquiring data yourself, you might not have thought about this, but someone has to make this data! A lot of the time this involves manual labor. Sometimes, you don’t have access to the correct information, or it is infeasible or costly to acquire. You still want to have some idea of the structure of the data. This is where unsupervised machine learning comes into play.

In this book we are first going to talk about clustering. This is where instead of training on labels, we try to create our own labels. We’ll do this by grouping together data that looks alike. The two methods of clustering we’ll talk about are k-means clustering and hierarchical clustering.

Next, because in machine learning we like to talk about probability distributions, we’ll go into Gaussian mixture models and kernel density estimation, where we talk about how to learn the probability distribution of a set of data. One interesting fact is that under certain conditions, Gaussian mixture models and k-means clustering are exactly the same! We’ll prove how this is the case.

Lastly, we’ll look at the theory behind principal components analysis, or PCA. PCA has many useful applications: visualization, dimensionality reduction, denoising, and de-correlation. You will see how it allows us to take a different perspective on latent variables, which first appear when we talk about k-means clustering and GMMs.

All the algorithms we’ll talk about in this course are staples in machine learning and data science, so if you want to know how to automatically find patterns in your data with data mining and pattern extraction, without needing someone to put in manual work to label that data, then this book is for you. All of the materials required to follow along in this book are free: you just need to be able to download and install Python, Numpy, Scipy, Matplotlib, and Scikit-Learn.
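For a taste of the workflow described above, here is a minimal Scikit-Learn sketch on synthetic data (illustrative only, not the book's code, which derives the algorithms from scratch): k-means invents labels for unlabeled data, and PCA reduces the dimensionality for visualization.

# Cluster unlabeled data with k-means, then project it to 2-D with PCA.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Unlabeled data: 3 hidden clusters in 5 dimensions.
X, _ = make_blobs(n_samples=500, centers=3, n_features=5, random_state=0)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)  # self-made labels
X2 = PCA(n_components=2).fit_transform(X)                                # 2-D projection

print(np.bincount(labels))   # roughly equal cluster sizes
print(X2.shape)              # (500, 2)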
Publication date: 05/22/2016
Kindle book details: Kindle Edition, 38 pages

Convolutional Neural Networks in Python: Master Data Science and Machine Learning with Modern Deep Learning in Python, Theano, and TensorFlow (Machine Learning in Python)

https://www.amazon.com/Convolutional-Neural-Networks-Python-...
This is the 3rd part in my Data Science and Machine Learning series on Deep Learning in Python. At this point, you already know a lot about neural networks and deep learning, including not just the basics like backpropagation, but how to improve it using modern techniques like momentum and adaptive learning rates. You’ve already written deep neural networks in Theano and TensorFlow, and you know how to run code using the GPU.

This book is all about how to use deep learning for computer vision using convolutional neural networks. These are the state of the art when it comes to image classification, and they beat vanilla deep networks at tasks like MNIST.

In this course we are going to up the ante and look at the StreetView House Number (SVHN) dataset - which uses larger color images at various angles - so things are going to get tougher both computationally and in terms of the difficulty of the classification task. But we will show that convolutional neural networks, or CNNs, are capable of handling the challenge!

Because convolution is such a central part of this type of neural network, we are going to go in-depth on this topic. It has more applications than you might imagine, such as modeling artificial organs like the pancreas and the heart. I’m going to show you how to build convolutional filters that can be applied to audio, like the echo effect, and I’m going to show you how to build filters for image effects, like the Gaussian blur and edge detection.

After describing the architecture of a convolutional neural network, we will jump straight into code, and I will show you how to extend the deep neural networks we built last time with just a few new functions to turn them into CNNs. We will then test their performance and show how convolutional neural networks written in both Theano and TensorFlow can outperform the accuracy of a plain neural network on the StreetView House Number dataset.

All the materials used in this book are FREE. You can download and install Python, Numpy, Scipy, Theano, and TensorFlow with pip or easy_install.

Lastly, my goal is to show you that convolutional networks aren’t magical and they don’t require expert-level math to figure out. It’s just the same thing we had with regular neural networks:

y = softmax(relu(X.dot(W1)).dot(W2))

Except we replace the first “dot product” with a convolution:

y = softmax(relu(conv(X, W1)).dot(W2))

The way they are trained is exactly the same as before, so all your skills with backpropagation, etc. carry over.
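To make the two one-liners above concrete, here is a hedged Numpy/Scipy sketch of both forward passes with toy shapes and random weights (my own illustration; the book builds these networks in Theano and TensorFlow).

# Plain feedforward forward pass vs. a convolutional forward pass.
import numpy as np
from scipy.signal import convolve2d

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

relu = lambda a: np.maximum(a, 0)

# Dense version: y = softmax(relu(X.dot(W1)).dot(W2))
X = np.random.randn(16, 784)              # 16 flattened 28x28 images
W1 = np.random.randn(784, 100) * 0.01
W2 = np.random.randn(100, 10) * 0.01
y_dense = softmax(relu(X.dot(W1)).dot(W2))

# Convolutional version: the first dot product becomes a convolution,
# y = softmax(relu(conv(X, W1)).dot(W2))
img = np.random.randn(28, 28)             # one image, kept 2-D this time
kernel = np.random.randn(5, 5) * 0.01     # a single 5x5 convolutional filter
feature_map = relu(convolve2d(img, kernel, mode='valid'))   # 24x24 feature map
W2_conv = np.random.randn(feature_map.size, 10) * 0.01
y_conv = softmax(feature_map.reshape(1, -1).dot(W2_conv))

print(y_dense.shape, y_conv.shape)        # (16, 10) (1, 10)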
Publication date: 05/15/2016
Kindle book details: Kindle Edition, 41 pages

Markov Models: Master Data Science and Unsupervised Machine Learning in Python

https://www.amazon.com/Markov-Models-Science-Unsupervised-Le...
Markov Models are all about learning sequences.

A lot of the data that would be very useful for us to model is in sequences. Stock prices are sequences of prices. Language is a sequence of words. Credit scoring involves sequences of borrowing and repaying money, and we can use those sequences to predict whether or not you’re going to default. In short, sequences are everywhere.

The easiest way to appreciate the kind of information you get from a sequence is to consider what you are reading right now. If I had printed the previous sentence backwards, it wouldn’t make much sense to you, even though it contained all the same words. So order is important.

While the current fad in deep learning is to use recurrent neural networks (LSTMs) to model sequences, I want to first introduce you to a machine learning algorithm that has been around for several decades now - the Markov Model.

This book follows directly from my first course in Unsupervised Machine Learning for Cluster Analysis, where you learned how to measure the probability distribution of a random variable. In this course, you’ll learn to measure the probability distribution of a sequence of random variables.

This course is also going to go through the many practical applications of Markov models. We’re going to look at a model of sickness and health, and calculate how to predict how long you’ll stay sick if you get sick. We’re going to talk about how Markov models can be used to analyze how people interact with your website, and fix problem areas like a high bounce rate, which could be affecting your SEO. We’ll build language models that can be used to identify a writer and even generate text - imagine a machine doing your writing for you. We’ll look at what is possibly the most recent and prolific application of Markov models - Google’s PageRank algorithm. It is surprising that the world’s leading search engine could have made its money on what is essentially an undergraduate linear algebra problem.

If you think Markov models aren’t relevant to your life, think again. Even smartphone autosuggestions can be programmed using Markov models.

Amazingly, all the technologies we discuss in this book can be downloaded and installed for FREE. That means all you need to invest after purchasing this book is your effort and your time. The only prerequisites are that you are comfortable with Python and the Numpy stack and you know the basics of probability.
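Here is a tiny sketch of the sick/healthy model mentioned above, with toy numbers of my own (not the book's): a two-state transition matrix, the expected length of a sick spell, and the long-run fraction of time spent in each state.

# Two-state Markov chain: healthy <-> sick.
import numpy as np

states = ["healthy", "sick"]
# A[i, j] = P(next state = j | current state = i); each row sums to 1.
A = np.array([
    [0.95, 0.05],   # healthy -> healthy, healthy -> sick
    [0.20, 0.80],   # sick -> healthy,    sick -> sick
])

# The length of a sick spell is geometric with continue-probability A[1, 1],
# so its expected value is 1 / (1 - A[1, 1]) days.
print(1.0 / (1.0 - A[1, 1]))   # 5.0

# Stationary distribution: the left eigenvector of A with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(A.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print(dict(zip(states, pi)))   # long-run fraction of time healthy vs. sick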
Publication date: 09/24/2016
Kindle book details: Kindle Edition, 34 pages

Deep Learning in Python Prerequisites: Master Data Science and Machine Learning with Linear Regression and Logistic Regression in Python (Machine Learning in Python)

https://www.amazon.com/Deep-Learning-Python-Prerequisites-Re...
Do you find deep learning difficult?

So you want to learn about deep learning and neural networks, but you don’t have a clue what machine learning even is. This book is for you. Perhaps you’ve already tried to read some tutorials about deep learning, and were just left scratching your head because you did not understand any of it. This book is for you.

Believe the hype. Deep learning is making waves. At the time of this writing (March 2016), Google’s AlphaGo program just beat 9-dan professional Go player Lee Sedol at the game of Go, a Chinese board game. Experts in the field of Artificial Intelligence thought we were 10 years away from achieving a victory against a top professional Go player, but progress seems to have accelerated!

While deep learning is a complex subject, it is not any more difficult to learn than any other machine learning algorithm. I wrote this book to introduce you to the prerequisites of neural networks, so that learning about neural networks in the future will seem like a natural extension of these topics. You will get along fine with undergraduate-level math and programming skill.

All the materials in this book can be downloaded and installed for free. We will use the Python programming language, along with the numerical computing library Numpy.

Unlike other machine learning algorithms, deep learning is particularly powerful because it automatically learns features. That means you don’t need to spend your time trying to come up with and test “kernels” or “interaction effects” - something only statisticians love to do. Instead, we will eventually let the neural network learn these things for us. Each layer of the neural network is made up of logistic regression units.

Do you want a gentle introduction to this “dark art”, with practical code examples that you can try right away and apply to your own data? Then this book is for you.

This book was designed to contain all the prerequisite information you need for my next book, Deep Learning in Python: Master Data Science and Machine Learning with Modern Neural Networks written in Python, Theano, and TensorFlow.

There are many techniques that you should be comfortable with before diving into deep learning. For example, the “backpropagation” algorithm is just gradient descent, which is the same technique that is used to solve logistic regression. The error functions and output functions of a neural network are exactly the same as those used in linear regression and logistic regression. The training process is nearly identical. Thus, learning about linear regression and logistic regression before you embark on your deep learning journey will make things much, much simpler for you.

Required resources

Following this book does not require any external materials. Everything needed (Python, and some Python libraries) can be obtained for free.
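Since the blurb claims that backpropagation is just gradient descent applied to logistic-regression-like units, here is a minimal sketch of my own (not the book's code) of logistic regression trained with gradient descent on synthetic data.

# Logistic regression trained by gradient descent - the same update rule
# that backpropagation applies layer by layer in a neural network.
import numpy as np

rng = np.random.default_rng(0)
N, D = 200, 2
X = rng.normal(size=(N, D))
true_w = np.array([2.0, -3.0])
y = (X.dot(true_w) > 0).astype(float)    # synthetic binary labels

w = np.zeros(D)
lr = 0.1
for _ in range(1000):
    p = 1 / (1 + np.exp(-X.dot(w)))      # sigmoid output (a single "unit")
    grad = X.T.dot(p - y) / N            # gradient of the cross-entropy error
    w -= lr * grad                       # gradient descent step

print(w)   # should point in roughly the same direction as true_w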
Publication date: 03/19/2016
Kindle book details: Kindle Edition, 31 pages

SQL for Marketers: Dominate data analytics, data science, and big data (Data Science and Machine Learning in Python)

https://www.amazon.com/SQL-Marketers-Dominate-analytics-Lear...
Be data-driven

More and more companies these days are learning that they need to make DATA-DRIVEN decisions. With big data and data science on the rise, we have more data than we know what to do with. One of the basic languages of data analytics is SQL, which is used for many popular databases including MySQL, Postgres, Microsoft SQL Server, Oracle, and even big data solutions like Hive and Cassandra.

I’m going to let you in on a little secret. Most high-level marketers and product managers at big tech companies know how to manipulate data to gain important insights. No longer do you have to wait around the entire day for some software engineer to answer your questions - now you can find the answers directly, by yourself, using SQL!

Your existing solutions are inefficient

Are you tired of depending on crufty “analytics” software? Do you have to ask an engineer to help you whenever you have a question about the data? This is not ideal and won’t help you do your job efficiently.

SQL, short for “structured query language”, is a language that can be used for all kinds of databases - from the tiny databases stored in your iPhone, to large big data databases that span multiple continents. Engineers have done a great job of creating these different types of complex data stores, while still allowing you to use the same language, more or less, for all of them.

What does that mean for you? It means as long as you know SQL, you can take advantage of ALL of this software and gain insights into this data, no matter what kind of database it is stored in, as long as it supports SQL.

You can ask questions like:
  • How many people are falling into each stage of the sales funnel?
  • What is my year over year profit?
  • Are there any differences in the demographics between the people who are buying product X and product Y?
  • What is our most profitable month?
  • What are the seasonal trends in our industry?
I’m an engineer, so I probably haven’t even thought of all the questions you’ve already had for years! But I guarantee you, knowing SQL will help you answer these questions. A minimal example of the first question is sketched below.

On the various teams I’ve worked on in the tech world, I’ve noticed that marketing people and product managers have SQL skills and sometimes even coding skills! So if you are looking to not only make your day more productive, but make yourself more marketable to employers and catch up to the other go-getters in your field - then you should most definitely learn SQL.

The takeaway

Do you want to know how to optimize your sales funnel using SQL, look at the seasonal trends in your industry, and run a SQL query on Hadoop? Then join me now in my new book, SQL for Marketers: Dominate data analytics, data science, and big data.
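For illustration, here is a hedged sketch of the funnel question ("how many people are in each stage of the sales funnel?") using Python's built-in sqlite3 module and a made-up events table; the schema and column names are my own assumptions, not anything from the book.

# Count distinct users per funnel stage in an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INTEGER, funnel_stage TEXT);
    INSERT INTO events VALUES
        (1, 'visited'), (2, 'visited'), (3, 'visited'),
        (1, 'signed_up'), (2, 'signed_up'),
        (1, 'purchased');
""")

query = """
    SELECT funnel_stage, COUNT(DISTINCT user_id) AS n_users
    FROM events
    GROUP BY funnel_stage
    ORDER BY n_users DESC;
"""
for stage, n_users in conn.execute(query):
    print(stage, n_users)    # visited 3, signed_up 2, purchased 1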
Publication date: 03/17/2016
Kindle book details: Kindle Edition, 34 pages

Deep Learning in Python: Master Data Science and Machine Learning with Modern Neural Networks written in Python, Theano, and TensorFlow (Machine Learning in Python)

https://www.amazon.com/Deep-Learning-Python-Networks-TensorF...
Deep Learning

Deep learning is making waves. At the time of this writing (March 2016), Google’s AlphaGo program just beat 9-dan professional Go player Lee Sedol at the game of Go, a Chinese board game. Experts in the field of Artificial Intelligence thought we were 10 years away from achieving a victory against a top professional Go player, but progress seems to have accelerated!

While deep learning is a complex subject, it is not any more difficult to learn than any other machine learning algorithm. I wrote this book to introduce you to the basics of neural networks. You will get along fine with undergraduate-level math and programming skill.

All the materials in this book can be downloaded and installed for free. We will use the Python programming language, along with the numerical computing library Numpy. I will also show you in the later chapters how to build a deep network using Theano and TensorFlow, which are libraries built specifically for deep learning and can accelerate computation by taking advantage of the GPU.

Unlike other machine learning algorithms, deep learning is particularly powerful because it automatically learns features. That means you don’t need to spend your time trying to come up with and test “kernels” or “interaction effects” - something only statisticians love to do. Instead, we will let the neural network learn these things for us. Each layer of the neural network learns a different abstraction than the previous layers. For example, in image classification, the first layer might learn different strokes, the next layer puts the strokes together to learn shapes, the next layer puts the shapes together to form facial features, and the next layer has a high-level representation of faces.

On top of all this, deep learning is known for winning its fair share of Kaggle contests. These are machine learning contests that are open to anyone in the world, and contestants are allowed to use any machine learning technique they want. Deep learning is that powerful.

Do you want a gentle introduction to this “dark art”, with practical code examples that you can try right away and apply to your own data? Then this book is for you.

Who is this book NOT for?

Deep Learning and Neural Networks are usually taught at the upper-year undergraduate level. That should give you some idea of the type of knowledge you need to understand this kind of material. You absolutely need exposure to calculus to understand deep learning, no matter how simple the instructor makes things. Linear algebra would help. I will assume familiarity with Python (although it is an easy language to pick up). You will need to have some concept of machine learning. If you know about algorithms like logistic regression already, this book is perfect for you. If not, you might want to check out my “prerequisites” book, at: http://amzn.com/B01D7GDRQ2

On the other hand, this book is more like a casual primer than a dry textbook. If you are looking for material on more advanced topics, like LSTMs, convolutional neural networks, or reinforcement learning, I have online courses that teach this material, for example: https://www.udemy.com/deep-learning-convolutional-neural-networks-theano-tensorflow

New libraries like TensorFlow are being updated constantly. This is not an encyclopedia for these libraries (as such a thing would be impossible to keep up to date). In the one (1!!!) month since the book was first published, no fewer than THREE new wrapper libraries for TensorFlow have been released to make coding deep networks easier. To try to incorporate every little update would not only be impossible, but would continually cause parts of the book to become obsolete. Nobody wants that. This book, rather, covers the fundamentals. Understanding these building blocks will make tackling these new libraries and features a piece of cake - that is my goal.
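As a hedged illustration of the layered-abstraction idea described above, here is a toy Numpy forward pass of my own (not the book's code): each layer simply re-represents the output of the previous one.

# Three stacked layers, each transforming the previous layer's representation.
import numpy as np

def sigmoid(a):
    return 1 / (1 + np.exp(-a))

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 784))                    # a batch of 8 flattened 28x28 images

W1 = rng.normal(size=(784, 300)) * 0.01
W2 = rng.normal(size=(300, 100)) * 0.01
W3 = rng.normal(size=(100, 10)) * 0.01

Z1 = sigmoid(X.dot(W1))      # first abstraction (e.g. strokes)
Z2 = sigmoid(Z1.dot(W2))     # second abstraction (e.g. shapes)
Y  = softmax(Z2.dot(W3))     # class probabilities

print(Y.shape, Y.sum(axis=1))   # (8, 10), and each row sums to 1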
Publication date: 03/11/2016
Kindle book details: Kindle Edition, 50 pages

Deep Learning: Recurrent Neural Networks in Python: LSTM, GRU, and more RNN machine learning architectures in Python and Theano (Machine Learning in Python)

https://www.amazon.com/Deep-Learning-Recurrent-Networks-arch...
LSTM, GRU, and more advanced recurrent neural networks

Like Markov models, recurrent neural networks are all about learning sequences - but whereas Markov models are limited by the Markov assumption, recurrent neural networks are not. As a result, they are more expressive and more powerful, and they have made progress on tasks that had been stuck for decades.

In the first section of the course we are going to add the concept of time to our neural networks. I’ll introduce you to the Simple Recurrent Unit, also known as the Elman unit. We are going to revisit the XOR problem, but we’re going to extend it so that it becomes the parity problem - you’ll see that regular feedforward neural networks will have trouble solving this problem, but recurrent networks will work, because the key is to treat the input as a sequence.

In the next section of the book, we are going to revisit one of the most popular applications of recurrent neural networks - language modeling. One popular application of neural networks for language is word vectors, or word embeddings. The most common technique for this is called Word2Vec, but I’ll show you how recurrent neural networks can also be used for creating word vectors.

In the section after, we’ll look at the very popular LSTM, or long short-term memory unit, and the more modern and efficient GRU, or gated recurrent unit, which has been proven to yield comparable performance. We’ll apply these to some more practical problems, such as learning a language model from Wikipedia data and visualizing the word embeddings we get as a result.

All of the materials required for this course can be downloaded and installed for FREE. We will do most of our work in Numpy, Matplotlib, and Theano. I am always available to answer your questions and help you along your data science journey. See you in class!

“Hold up... what’s deep learning and all this other crazy stuff you’re talking about?”

If you are completely new to deep learning, you might want to check out my earlier books and courses on the subject:
  • Deep Learning in Python: https://www.amazon.com/dp/B01CVJ19E8
  • Deep Learning in Python Prerequisites: https://www.amazon.com/dp/B01D7GDRQ2

Much like how IBM’s Deep Blue beat world champion chess player Garry Kasparov in 1997, Google’s AlphaGo recently made headlines when it beat world champion Lee Sedol in March 2016. What was amazing about this win was that experts in the field didn’t think it would happen for another 10 years. The search space of Go is much larger than that of chess, meaning that existing techniques for playing games with artificial intelligence were infeasible. Deep learning was the technique that enabled AlphaGo to correctly predict the outcome of its moves and defeat the world champion. Deep learning progress has accelerated in recent years due to more processing power (see: Tensor Processing Unit or TPU), larger datasets, and new algorithms like the ones discussed in this book.
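Here is a hedged Numpy sketch of the Simple Recurrent (Elman) unit mentioned above, forward pass only, with toy sizes of my own choosing (the book implements and trains these units in Theano).

# Elman recurrence: h_t = tanh(x_t Wx + h_{t-1} Wh + bh), applied over a sequence.
import numpy as np

rng = np.random.default_rng(0)
D, M = 4, 8                              # input dimension, hidden-state dimension
Wx = rng.normal(size=(D, M)) * 0.1       # input-to-hidden weights
Wh = rng.normal(size=(M, M)) * 0.1       # hidden-to-hidden (recurrent) weights
bh = np.zeros(M)

def elman_forward(X_sequence):
    h = np.zeros(M)
    hidden_states = []
    for x_t in X_sequence:               # the input is treated as a sequence in time
        h = np.tanh(x_t.dot(Wx) + h.dot(Wh) + bh)
        hidden_states.append(h)
    return np.array(hidden_states)

sequence = rng.normal(size=(10, D))      # a sequence of 10 time steps
H = elman_forward(sequence)
print(H.shape)                           # (10, 8): one hidden state per time step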
Publication date: 08/08/2016
Kindle book details: Kindle Edition, 56 pages

Deep Learning: Natural Language Processing in Python with Recursive Neural Networks: Recursive Neural (Tensor) Networks in Theano (Deep Learning and Natural Language Processing Book 3)

https://www.amazon.com/Deep-Learning-Language-Processing-Rec...
The first 2 books in this series focused on word embeddings using 2 novel techniques: Word2Vec and GloVe.

In this book, we return to a classic NLP problem: sentiment analysis. Classification performance on the sentiment analysis task had plateaued for many years, largely because models could not handle negation, which is essentially because existing models failed to account for the structure of language. The bag-of-words vectors for “I love this movie”, “I don’t love this movie”, and “Don’t you love this movie” are very similar.

In this book, we return to the fundamentals of language - the parse tree - and structure our neural networks to mirror the tree. It makes sense that a neural network created to classify language would have the same structure as language. These neural networks are called “recursive neural networks”, and I will show you how they work both mathematically and with a full implementation in Theano.

A naive solution to recursive neural networks would be to use recursion to implement them. This is, however, a very poor solution, because both Theano and TensorFlow require you to compile a graph of the neural network. If every sentence is a different tree, then every sentence will require a different neural network graph, which would be very inefficient for both Theano and TensorFlow to compute. No one would blame you for attempting this solution first. In fact, I will demonstrate why it’s bad by having you run code that implements it. Once you understand why recursion is not ideal for recursive neural networks, I will show you a “trick” that will help you implement them more efficiently. We will then run the recursive neural net on our sentiment analysis data and achieve state-of-the-art performance.

Finally, we discuss a modification to the vanilla recursive neural network called the recursive neural tensor network, or RNTN. It was invented by the guys at Stanford, who have created and published many NLP tools throughout the years that are now considered standard. I will show you how the model is structured mathematically and then I will show you how to implement it in Theano. You’ll see that it’s just a simple modification to our recursive neural network.

Amazingly, all the technologies we discuss in this book can be downloaded and installed for FREE. That means all you need to invest after purchasing this book is your effort and your time. The only prerequisites are that you are comfortable with Python, Numpy, and Theano coding and you know the basics of deep learning.

“Hold up... what’s deep learning and all this other crazy stuff you’re talking about?”

If you are completely new to deep learning, you might want to check out my earlier books and courses on the subject, since they are required in order to understand this book. Just go to my profile and look for “Deep Learning in Python” and “Deep Learning in Python Prerequisites”.

Much like how IBM’s Deep Blue beat world champion chess player Garry Kasparov in 1997, Google’s AlphaGo recently made headlines when it beat world champion Lee Sedol in March 2016. What was amazing about this win was that experts in the field didn’t think it would happen for another 10 years. The search space of Go is much larger than that of chess, meaning that existing techniques for playing games with artificial intelligence were infeasible. Deep learning was the technique that enabled AlphaGo to correctly predict the outcome of its moves and defeat the world champion. Deep learning progress has accelerated in recent years due to more processing power (see: Tensor Processing Unit or TPU), larger datasets, and new algorithms like the ones discussed in this book.
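To illustrate the core idea of a network that mirrors a parse tree, here is a hedged Numpy sketch of my own: a node's vector is a nonlinear function of its children's vectors. The weights, tree, and vocabulary are made up for illustration; the book's actual implementation is in Theano.

# Recursive composition over a parse tree: combine child vectors into a parent vector.
import numpy as np

rng = np.random.default_rng(0)
D = 10                                   # embedding / hidden dimension
W_left  = rng.normal(size=(D, D)) * 0.1
W_right = rng.normal(size=(D, D)) * 0.1
b = np.zeros(D)
word_vectors = {w: rng.normal(size=D) for w in ["i", "love", "this", "movie"]}

def node_vector(tree):
    # tree is either a word (leaf) or a (left_subtree, right_subtree) pair.
    if isinstance(tree, str):
        return word_vectors[tree]
    left, right = tree
    return np.tanh(W_left.dot(node_vector(left)) + W_right.dot(node_vector(right)) + b)

# Parse tree for "i (love (this movie))", written as nested pairs.
sentence_tree = ("i", ("love", ("this", "movie")))
print(node_vector(sentence_tree).shape)  # (10,): one vector for the whole sentence
# A sentiment classifier would put a softmax layer on top of each node's vector.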
Publication date: 08/20/2016
Kindle book details: Kindle Edition, 50 pages