
ethack : machine-learning   16

Practical Deep Learning For Coders—18 hours of lessons for free
Welcome to fast.ai's 7-week course, Practical Deep Learning For Coders, Part 1, taught by Jeremy Howard (Kaggle's #1 competitor 2 years running, and founder of Enlitic). Learn how to build state-of-the-art models without needing graduate-level math - but also without dumbing anything down. Oh, and one other thing... it's totally free!
machine-learning  artificial-intelligence  computer  science  course 
july 2017 by ethack
Home Page of Geoffrey Hinton
Basic papers and papers on deep learning without much math
machine-learning  computer  science 
july 2017 by ethack
Classic papers
Scholarly research is often about the latest findings - the newest knowledge that our colleagues have gleaned from nature. Some articles buck this pattern and have impact long after their publication.

Today, we are releasing Classic Papers, a collection of highly-cited papers in their area of research that have stood the test of time. For each area, we list the ten most-cited articles that were published ten years earlier. This release of classic papers consists of articles that were published in 2006 and is based on our index as it was in May 2017.

To browse classic papers, select one of the broad areas and then select the specific research field of your interest - for example, Agronomy & Crop Science, Oil, Petroleum & Natural Gas, or African Studies & History.

The list of classic papers includes articles that presented new research. It specifically excludes review articles, introductory articles, editorials, guidelines, commentaries, etc. It also excludes articles with fewer than 20 citations and, for now, is limited to articles written in English.
curated  math  computer  science  machine-learning 
june 2017 by ethack
terryum/awesome-deep-learning-papers: The most cited deep learning papers
A curated list of the most cited deep learning papers (since 2012)

We believe that there exist classic deep learning papers which are worth reading regardless of their application domain. Rather than providing an overwhelming number of papers, we would like to provide a curated list of the awesome deep learning papers which are considered must-reads in certain research domains.
machine-learning  artificial-intelligence  curated 
february 2017 by ethack
Deep Reinforcement Learning: Pong from Pixels
This is a long overdue blog post on Reinforcement Learning (RL). RL is hot! You may have noticed that computers can now automatically learn to play ATARI games (from raw game pixels!), they are beating world champions at Go, simulated quadrupeds are learning to run and leap, and robots are learning how to perform complex manipulation tasks that defy explicit programming. It turns out that all of these advances fall under the umbrella of RL research. I also became interested in RL myself over the last ~year: I worked through Richard Sutton's book, read through David Silver's course, watched John Schulman's lectures, wrote an RL library in JavaScript, over the summer interned at DeepMind working in the DeepRL group, and most recently pitched in a little with the design/development of OpenAI Gym, a new RL benchmarking toolkit. So I've certainly been on this funwagon for at least a year but until now I haven't gotten around to writing up a short post on why RL is a big deal, and what it's about.
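The post builds up to policy gradients. The core idea can be sketched on the smallest possible problem - a 2-armed bandit with a softmax policy, trained by REINFORCE (this toy setup and its constants are illustrative, not taken from the post):

```python
import math
import random

def softmax(prefs):
    # Numerically stable softmax over action preferences.
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

def train_bandit(steps=2000, lr=0.1, seed=0):
    """REINFORCE on a 2-armed bandit: arm 1 pays reward 1, arm 0 pays 0."""
    rng = random.Random(seed)
    prefs = [0.0, 0.0]  # action preferences (the policy parameters)
    for _ in range(steps):
        probs = softmax(prefs)
        action = 0 if rng.random() < probs[0] else 1
        reward = 1.0 if action == 1 else 0.0
        # Policy-gradient update: grad of log pi(action) w.r.t. pref i
        # is (1[i == action] - pi(i)); scale it by the reward.
        for i in range(2):
            indicator = 1.0 if i == action else 0.0
            prefs[i] += lr * reward * (indicator - probs[i])
    return softmax(prefs)

probs = train_bandit()
# After training, the policy strongly prefers the rewarding arm.
```

Sampling an action, observing a reward, and nudging the policy's log-probability in proportion to that reward is the same loop the post applies to Pong, just with a deep network instead of two numbers.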
july 2016 by ethack
A Course in Machine Learning
CIML is a set of introductory materials that covers most major aspects of modern machine learning (supervised learning, unsupervised learning, large margin methods, probabilistic modeling, learning theory, etc.). Its focus is on broad applications with a rigorous backbone. A subset can be used for an undergraduate course; a graduate course could probably cover the entire material and then some.
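One of the first supervised-learning algorithms such a course covers is the perceptron; a minimal sketch in plain Python (the toy data here is made up for illustration):

```python
def train_perceptron(samples, epochs=20):
    """Classic perceptron: find w, b so that sign(w.x + b) matches the labels.
    samples is a list of ((x1, x2), label) pairs with label in {-1, +1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            activation = w[0] * x1 + w[1] * x2 + b
            if y * activation <= 0:  # misclassified: nudge toward the example
                w[0] += y * x1
                w[1] += y * x2
                b += y
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

# Linearly separable toy data: label is the sign of x1 - x2.
data = [((2, 0), 1), ((3, 1), 1), ((0, 2), -1), ((1, 3), -1)]
w, b = train_perceptron(data)
```

On linearly separable data like this, the update rule is guaranteed to converge to a separating hyperplane - the starting point for the large margin methods the course builds on.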
july 2016 by ethack
TensorFlow Tutorial with popular machine learning algorithms implementation. This tutorial was designed to make it easy to dive into TensorFlow, through examples.

It is suitable for beginners who want to find clear and concise examples about TensorFlow. For readability, the tutorial includes both notebooks and code with explanations.
python  machine-learning  library 
june 2016 by ethack
OpenAI Gym
Open source interface to reinforcement learning tasks.

The gym open-source project provides a simple interface to a growing collection of reinforcement learning tasks. You can use it from Python, and soon from other languages.
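The whole point of that interface is a uniform reset()/step() contract. A toy environment written against the same contract (GuessEnv is hypothetical, not part of the gym package) shows the standard agent-environment loop:

```python
import random

class GuessEnv:
    """Toy environment following Gym's reset()/step() contract:
    guess a hidden number in {0, 1, 2}; one-shot episodes."""

    def __init__(self, seed=0):
        self._rng = random.Random(seed)
        self._target = None

    def reset(self):
        # Start a new episode and return the initial observation.
        self._target = self._rng.randrange(3)
        return 0  # a trivial observation

    def step(self, action):
        # Return (observation, reward, done, info), as Gym tasks do.
        reward = 1.0 if action == self._target else 0.0
        return 0, reward, True, {}

# The agent-environment loop, identical in shape for any Gym task:
env = GuessEnv()
total = 0.0
for episode in range(10):
    obs = env.reset()
    done = False
    while not done:
        action = random.Random(episode).randrange(3)  # random policy
        obs, reward, done, info = env.step(action)
        total += reward
```

Because every task exposes the same four return values, the same agent code runs unchanged across the whole collection.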
computer  science  math  machine-learning 
may 2016 by ethack
TensorFlow -- an Open Source Software Library for Machine Intelligence
TensorFlow is an Open Source Software Library for Machine Intelligence
library  math  computer  science  machine-learning 
may 2016 by ethack
Theano
Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Theano features:

tight integration with NumPy – Use numpy.ndarray in Theano-compiled functions.
transparent use of a GPU – Perform data-intensive calculations up to 140x faster than on a CPU (float32 only).
efficient symbolic differentiation – Theano computes your derivatives for functions with one or many inputs.
speed and stability optimizations – Get the right answer for log(1+x) even when x is really tiny.
dynamic C code generation – Evaluate expressions faster.
extensive unit-testing and self-verification – Detect and diagnose many types of errors.
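The log(1+x) stability point is easy to demonstrate: for tiny x, evaluating log(1+x) naively loses everything to rounding, while the rewrite Theano applies behaves like the standard library's log1p:

```python
import math

x = 1e-17
# 1.0 + 1e-17 rounds to exactly 1.0 in float64 (machine epsilon ~2.2e-16),
# so the naive expression returns log(1.0) = 0.0 - the answer is gone.
naive = math.log(1.0 + x)

# log1p evaluates log(1+x) accurately for small x: ~1e-17.
stable = math.log1p(x)
```

Theano performs this kind of rewrite automatically on the symbolic expression graph, so user code can stay written as log(1+x).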
python  library  math  computer  science  machine-learning 
may 2016 by ethack
Hacker's guide to Neural Networks
My personal experience with Neural Networks is that everything became much clearer when I started ignoring full-page, dense derivations of backpropagation equations and just started writing code. Thus, this tutorial will contain very little math (I don't believe it is necessary and it can sometimes even obfuscate simple concepts). Since my background is in Computer Science and Physics, I will instead develop the topic from what I refer to as a hacker's perspective. My exposition will center around code and physical intuitions instead of mathematical derivations. Basically, I will strive to present the algorithms in a way that I wish I had come across when I was starting out.


You might be eager to jump right in and learn about Neural Networks, backpropagation, how they can be applied to datasets in practice, etc. But before we get there, I'd like us to first forget about all that. Let's take a step back and understand what is really going on at the core. Let's first talk about real-valued circuits.
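The guide's opening circuit is a single multiply gate, with the gradient estimated numerically by nudging each input; a short sketch in that spirit (the specific values are illustrative):

```python
def forward_multiply_gate(x, y):
    """A single real-valued circuit: one multiply gate."""
    return x * y

def numerical_gradient(f, x, y, h=1e-4):
    """Estimate df/dx and df/dy with finite differences:
    nudge each input by h and see how the output moves."""
    out = f(x, y)
    dx = (f(x + h, y) - out) / h
    dy = (f(x, y + h) - out) / h
    return dx, dy

x, y = -2.0, 3.0
dx, dy = numerical_gradient(forward_multiply_gate, x, y)

# Nudge the inputs along the gradient: the circuit's output increases.
step = 0.01
x2, y2 = x + step * dx, y + step * dy
```

For the multiply gate the numerical estimates match the analytic answers (df/dx = y, df/dy = x), which is the bridge the guide builds from "wiggle the inputs" to backpropagation.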
computer  science  machine-learning 
february 2015 by ethack
