Interesting articles on Machine Learning
An (incomplete) list of articles on Machine Learning that I have read, kept mainly as a reference I can access when I'm not at home.
On Deep Learning
Deep Learning is the revolution currently ongoing in the field of machine learning. Everything from self-driving cars to speech recognition and playing Go can be accomplished using Deep Learning. There is a lot of research going on in HEP on how to take advantage of Deep Learning in our analyses.
- 24.06.2012 Yoshua Bengio, Aaron C. Courville, and Pascal Vincent. „Unsupervised Feature Learning and Deep Learning: A Review and New Perspectives“.
- 03.07.2012 G. E. Hinton „Improving neural networks by preventing co-adaptation of feature detectors“
- 2013 L. Wan et al. „Regularization of Neural Networks using DropConnect“
- 18.02.2013 I. J. Goodfellow „Maxout Networks“
- 19.02.2014 Pierre Baldi, Peter Sadowski, and Daniel Whiteson. „Searching for Exotic Particles in High-Energy Physics with Deep Learning“
- 11.02.2015 S. Ioffe, C. Szegedy „Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift“
- 26.02.2015 V. Mnih et al. „Human-level control through deep reinforcement learning“
- 28.05.2015 Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. „Deep learning“.
- 24.06.2016 H. Cheng et al. „Wide & Deep Learning for Recommender Systems“
- 29.08.2016 Henry W. Lin, Max Tegmark, and David Rolnick. „Why does deep and cheap learning work so well?“
- 05.06.2017 A. Santoro et al. „A simple neural network module for relational reasoning“
- 25.10.2017 D. Levy et al. „Generalizing Hamiltonian Monte Carlo with Neural Networks“
- 28.10.2017 A. van den Oord et al. „Parallel WaveNet: Fast High-Fidelity Speech Synthesis“
- 13.12.2017 R. Vidal et al. „Mathematics of Deep Learning“
- 15.02.2018 A. S. Morcos et al. „On the importance of single directions for generalization“
- 22.02.2018 N. Carlini et al. „The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets“
Gradient Descent Optimization
Reinforcement Learning
- 19.12.2013 V. Mnih et al. „Playing Atari with Deep Reinforcement Learning“
- 27.01.2016 D. Silver et al. „Mastering the game of Go with deep neural networks and tree search“
- 04.02.2016 V. Mnih et al. „Asynchronous Methods for Deep Reinforcement Learning“
- 02.12.2016 J. Kirkpatrick et al. „Overcoming catastrophic forgetting in neural networks“
- 25.01.2017 Y. Li „Deep Reinforcement Learning: An Overview“
- 28.02.2017 O. Nachum et al. „Bridging the Gap Between Value and Policy Based Reinforcement Learning“
- 06.10.2017 M. Hessel et al. „Rainbow: Combining Improvements in Deep Reinforcement Learning“
- 18.10.2017 [D. Silver et al. „Mastering the game of Go without Human Knowledge“](https://deepmind.com/research/publications/mastering-game-go-without-human-knowledge/)
- 02.11.2017 M. Lanctot et al. „A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning“
- 02.01.2018 Y. Tassa et al. „DeepMind Control Suite“
- 10.02.2018 O. Nachum, Y. Chow, M. Ghavamzadeh „Path Consistency Learning in Tsallis Entropy Regularized MDPs“
- 15.02.2018 G. Barth-Maron et al. „Distributed Distributional Deterministic Policy Gradients“
- 21.02.2018 N. C. Rabinowitz et al. „Machine Theory of Mind“
- 22.02.2018 D. J. Mankowitz et al. „Unicorn: Continual Learning with a Universal, Off-policy Agent“
- 24.02.2018 P. Chrabaszcz, I. Loshchilov, F. Hutter „Back to Basics: Benchmarking Canonical Evolution Strategies for Playing Atari“
- 28.02.2018 R. Dubey et al. „Investigating Human Priors for Playing Video Games“
Recurrent Neural Networks
Convolutional Neural Networks
Adversarial Examples
- 20.12.2014 I. J. Goodfellow „Explaining and Harnessing Adversarial Examples“
- 14.03.2016 N. Papernot et al. „Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks“
- 27.08.2016 T. Tanay, L. Griffin „A Boundary Tilting Perspective on the Phenomenon of Adversarial Examples“
- 11.02.2017 A. Kurakin et al. „Adversarial examples in the physical world“
- 14.02.2017 J. H. Metzen et al. „On Detecting Adversarial Perturbations“
- 03.03.2017 V. Fischer et al. „Adversarial Examples for Semantic Image Segmentation“
- 09.03.2017 S.-M. Moosavi-Dezfooli et al. „Universal adversarial perturbations“
- 28.03.2017 S. Baluja, I. Fischer „Adversarial Transformation Networks: Learning to Generate Adversarial Examples“
- 31.07.2017 J. H. Metzen et al. „Universal Adversarial Perturbations Against Semantic Image Segmentation“
- 09.11.2017 A. Madry et al. „Towards Deep Learning Models Resistant to Adversarial Attacks“
- 09.01.2018 J. Gilmer et al. „Adversarial Spheres“
- 30.01.2018 F. Tramer et al. „Ensemble Adversarial Training: Attacks and Defenses“
- 27.02.2018 G. F. Elsayed et al. „Adversarial Examples that Fool both Human and Computer Vision“
Adversarial Networks
Hyper Parameter Optimization
All multivariate methods have hyper-parameters, i.e. parameters which influence the performance of the algorithm and have to be set by the user. It is common to optimize these hyper-parameters automatically. There are four different approaches: grid search, random search, gradient-based optimization, and Bayesian optimization; the first two are sketched below.
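As a minimal sketch of the first two approaches, here is how a grid search and a random search could look with scikit-learn (listed under SKLearn below); the estimator, parameter ranges, and toy dataset are arbitrary assumptions for illustration, not a recommendation:

```python
# Hedged sketch: grid search vs. random search over hyper-parameters.
# The parameter ranges and the toy dataset are made-up examples.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_classification(n_samples=1000, random_state=0)

# Grid search: exhaustively evaluates every combination on a fixed grid.
grid = GridSearchCV(
    GradientBoostingClassifier(),
    param_grid={"max_depth": [2, 3, 4], "learning_rate": [0.05, 0.1, 0.2]},
    cv=3,
)
grid.fit(X, y)

# Random search: samples a fixed budget of combinations from distributions,
# which often finds good settings with fewer trials than a full grid.
rand = RandomizedSearchCV(
    GradientBoostingClassifier(),
    param_distributions={"max_depth": randint(2, 5),
                         "learning_rate": [0.05, 0.1, 0.2]},
    n_iter=5,
    cv=3,
    random_state=0,
)
rand.fit(X, y)

print(grid.best_params_, rand.best_params_)
```

Gradient-based and Bayesian methods follow the same train-and-score loop but pick the next trial point from the results of previous trials instead of from a fixed grid or random draws.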
On Boosted Decision Trees
Boosted decision trees are the workhorse of classification and regression in HEP. They offer good out-of-the-box performance, are reasonably fast, and are robust; a minimal example follows.
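To illustrate the out-of-the-box claim, a hedged sketch using scikit-learn's gradient-boosted trees with default settings; the toy dataset and the scoring choice are assumptions made for this example:

```python
# Hedged sketch: a boosted decision tree with default hyper-parameters.
# The synthetic dataset stands in for real analysis features.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

clf = GradientBoostingClassifier()  # defaults only, no tuning
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("mean ROC AUC:", scores.mean())
```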
On Data Analysis Techniques
With sPlot you can train a classifier directly on data. Similar methods are side-band subtraction and training data vs. MC; both are described in the second paper below.
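As a hedged sketch of the sPlot idea: one common recipe is to duplicate each data event, label one copy signal and the other background, and pass the corresponding sWeights as per-event training weights. The weights below are random placeholders, not the output of a real sPlot fit:

```python
# Hedged sketch: training a classifier on data with sWeights.
# `w_signal` would come from an sPlot fit (e.g. to a mass distribution);
# here it is a random placeholder.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))          # stand-in for data features
w_signal = rng.uniform(0.0, 1.0, 1000)  # placeholder signal sWeights
w_background = 1.0 - w_signal           # complementary background weights

# Each data event appears twice: once labelled signal, once background,
# weighted by the respective sWeight.
X_train = np.vstack([X, X])
y_train = np.concatenate([np.ones(1000), np.zeros(1000)])
w_train = np.concatenate([w_signal, w_background])

clf = GradientBoostingClassifier()
clf.fit(X_train, y_train, sample_weight=w_train)
```

Note that real sWeights can be negative, which not every classifier handles gracefully; the hep_ml package listed below targets this kind of HEP-specific use case.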
FastBDT
Thomas Keck. „FastBDT: A Speed-Optimized Multivariate Classification Algorithm for the Belle II Experiment“.
TMVA
Andreas Hoecker et al. „TMVA: Toolkit for Multivariate Data Analysis“.
FANN
S. Nissen. Implementation of a Fast Artificial Neural Network Library (fann).
SKLearn
F. Pedregosa et al. „Scikit-learn: Machine Learning in Python“.
hep_ml
XGBoost
Tianqi Chen and Carlos Guestrin. „XGBoost: A Scalable Tree Boosting System“.
Tensorflow
Martin Abadi et al. „TensorFlow: A system for large-scale machine learning“.
Theano
Rami Al-Rfou et al. „Theano: A Python framework for fast computation of mathematical expressions“.
NeuroBayes
M. Feindt and U. Kerzel. „The NeuroBayes neural network package“.
On Hardware
Others
- Joseph Conrad „The Secret Sharer“