So, have a look at it, and don’t forget to comment below if you like it.
1. Deep Learning
Deep Learning is one of the top papers written on Deep Learning. It is written by Yann L., Yoshua B., and Geoffrey H. It describes computational models composed of multiple processing layers that learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state of the art in speech recognition, visual object recognition, object detection, and many other domains such as genomics and drug discovery.
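As a rough illustration of what "multiple processing layers" means, here is a minimal sketch of a stacked network in NumPy. The layer sizes and random weights are hypothetical, chosen purely for illustration, and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical network: 8 inputs, two hidden layers of 16, 4 outputs.
layer_sizes = [8, 16, 16, 4]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    h = x
    for W in weights:
        # Each "processing layer" is a linear map plus a nonlinearity;
        # stacking them yields increasingly abstract representations.
        h = relu(h @ W)
    return h

x = rng.standard_normal(8)
out = forward(x)
```

Each pass through the loop corresponds to one level of abstraction in the paper's sense; training would adjust `weights` by backpropagation, which the sketch omits.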
2. TensorFlow: a system for large-scale machine learning
TensorFlow: a system for large-scale machine learning is an important paper written by Martin A., Paul B., Jianmin C., Zhifeng C., and Andy D. TensorFlow supports a variety of applications, with a focus on training and inference on deep neural networks. Several Google services use TensorFlow in production; it has been released as an open-source project and has become widely used for machine learning research.
3. Visualizing and Understanding Convolutional Networks
Visualizing and Understanding Convolutional Networks is written by Matt Zeiler and Rob Fergus. It introduces a visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier in large convolutional network models. Used as a diagnostic tool, these visualizations let the authors find architectures that outperform the prior state of the art on the ImageNet classification benchmark.
4. Human-level control through deep reinforcement learning
Human-level control through deep reinforcement learning by Volodymyr M., Koray K., David S., Andrei A.R., and Joel V. is an influential paper that shows how recent advances in training deep neural networks can be used to develop a novel artificial agent, termed a deep Q-network, that learns successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning.
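At the core of the deep Q-network is the Q-learning update rule. A tabular sketch of that update (toy state and action counts and assumed hyperparameter values, not the paper's neural network):

```python
import numpy as np

# Toy Q-table: 3 states x 2 actions (hypothetical sizes for illustration).
Q = np.zeros((3, 2))
alpha, gamma = 0.5, 0.9  # learning rate and discount factor (assumed values)

def q_update(s, a, r, s_next):
    """One Q-learning step: move Q[s, a] toward r + gamma * max_a' Q[s_next, a']."""
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

# Example transition: from state 0, action 1 yields reward 1.0, lands in state 2.
q_update(0, 1, 1.0, 2)
```

The DQN replaces the table with a deep convolutional network that estimates Q-values from raw pixels, and stabilizes training with experience replay and a separate target network.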
5. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning is written by Christian S., Sergey I., Vincent V., and Alexander A. It highlights that very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. With an ensemble of three residual networks and one Inception-v4 network, it achieved a 3.08% top-5 error on the test set of the ImageNet classification challenge.
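The residual connections the paper studies add a block's input back to its transformed output, so gradients can flow through the identity path. A minimal sketch with random, untrained weights (hypothetical sizes, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1  # assumed residual-branch weights

def residual_block(x):
    # y = x + F(x): the input is carried forward unchanged and the
    # learned branch F only needs to model the residual correction.
    return x + np.maximum(0.0, x @ W)

x = rng.standard_normal(4)
y = residual_block(x)
```

If the residual branch contributes nothing (e.g. a zero input here, since ReLU(0) = 0), the block reduces to the identity, which is what makes very deep stacks of such blocks trainable.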
6. Deep learning in neural networks
Deep learning in neural networks is written by Juergen Schmidhuber. It is a survey compactly summarizing relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit-assignment paths, which are chains of possibly learnable, causal links between actions and effects. It reviews deep supervised learning, unsupervised learning, indirect search for short programs encoding deep and large networks, and reinforcement learning and evolutionary computation.