
papers

[1703.03208] Compressed Sensing using Generative Models
"The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model G:ℝk→ℝn. Our main theorem is that, if G is L-Lipschitz, then roughly O(klogL) random Gaussian measurements suffice for an ℓ2/ℓ2 recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use 5-10x fewer measurements than Lasso for the same accuracy."
papers  compressed-sensing  generative 
3 days ago by arsyed
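A minimal sketch of the recovery idea described in the abstract above: given measurements y = A x + noise and a pretrained generator G, search the latent space for the z whose image best explains the measurements. The function name, optimizer choice, and hyperparameters here are illustrative assumptions, not the paper's reference implementation.

```python
# Sketch of compressed sensing with a generative prior (arXiv:1703.03208):
# recover x ~ G(z*) by minimizing the measurement-space error over z.
import torch

def recover(G, A, y, k, steps=1000, lr=0.05, restarts=5):
    """Return G(z*) where z* approximately minimizes ||A G(z) - y||_2^2 over z in R^k."""
    best_x, best_loss = None, float("inf")
    for _ in range(restarts):                    # random restarts help with non-convexity
        z = torch.randn(k, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = ((A @ G(z) - y) ** 2).sum()   # squared error in measurement space
            loss.backward()
            opt.step()
        if loss.item() < best_loss:
            best_loss, best_x = loss.item(), G(z).detach()
    return best_x
```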
[1705.07215] How to Train Your DRAGAN
"Generative Adversarial Networks have emerged as an effective technique for estimating data distributions. The basic setup consists of two deep networks playing against each other in a zero-sum game setting. However, it is not understood if the networks reach an equilibrium eventually and what dynamics makes this possible. The current GAN training procedure, which involves simultaneous gradient descent, lacks a clear game-theoretic justification in the literature. In this paper, we introduce regret minimization as a technique to reach equilibrium in games and use this to justify the success of simultaneous GD in GANs. In addition, we present a hypothesis that mode collapse, which is a common occurrence in GAN training, happens due to the existence of spurious local equilibria in non-convex games. Motivated by these insights, we develop an algorithm called DRAGAN that is fast, simple to implement and achieves competitive performance in a stable fashion across different architectures (150 random setups), datasets (MNIST, CIFAR-10, and CelebA), and divergence measures with almost no hyperparameter tuning. We show significant improvements over the recently proposed Wasserstein GAN variants."
papers  gan  game-theory  adversarial-learning 
3 days ago by arsyed
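A hedged sketch of the core mechanism DRAGAN adds on top of standard GAN training: a gradient penalty on perturbed real samples, intended to smooth the discriminator near the data manifold and avoid the spurious local equilibria the abstract hypothesizes. The perturbation scale and penalty coefficient below follow common DRAGAN implementations and are assumptions, not necessarily the paper's exact settings.

```python
# Sketch of the DRAGAN gradient penalty (arXiv:1705.07215): constrain the
# discriminator's gradient norm in a noisy neighborhood of the real data.
import torch

def dragan_penalty(D, real, lam=10.0):
    """lam * E[(||grad_x D(x_hat)||_2 - 1)^2], with x_hat = real + noise."""
    noise = 0.5 * real.std() * torch.rand_like(real)   # perturb within a data-scaled ball
    x_hat = (real + noise).requires_grad_(True)
    d_out = D(x_hat)
    grads = torch.autograd.grad(d_out.sum(), x_hat, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)        # per-sample gradient norm
    return lam * ((grad_norm - 1.0) ** 2).mean()
```

The penalty is added to the discriminator loss each step; because it touches only real samples plus noise, it leaves the generator's objective unchanged.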
[1706.05137] One Model To Learn Them All
"Deep learning yields great results across many fields, from speech recognition, image classification, to translation. But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning. We present a single model that yields good results on a number of problems spanning multiple domains. In particular, this single model is trained concurrently on ImageNet, multiple translation tasks, image captioning (COCO dataset), a speech recognition corpus, and an English parsing task. Our model architecture incorporates building blocks from multiple domains. It contains convolutional layers, an attention mechanism, and sparsely-gated layers. Each of these computational blocks is crucial for a subset of the tasks we train on. Interestingly, even if a block is not crucial for a task, we observe that adding it never hurts performance and in most cases improves it on all tasks. We also show that tasks with less data benefit largely from joint training with other tasks, while performance on large tasks degrades only slightly if at all."
papers  neural-net  composition  multi-domain 
3 days ago by arsyed
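Of the building blocks the abstract names, the sparsely-gated layer is the least standard; below is a toy sketch of top-k sparse gating in the style of a mixture-of-experts layer. This is an illustrative assumption about the mechanism, not the paper's MultiModel code; the class name, expert count, and sizes are hypothetical.

```python
# Toy sparsely-gated mixture-of-experts layer: a learned gate routes each
# input to its top-k experts, and their outputs are combined by gate weight.
import torch
import torch.nn as nn

class SparseMoE(nn.Module):
    def __init__(self, dim, n_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
        self.gate = nn.Linear(dim, n_experts)
        self.k = k

    def forward(self, x):                          # x: (batch, dim)
        scores = self.gate(x)                      # (batch, n_experts)
        topv, topi = scores.topk(self.k, dim=-1)   # keep only the top-k gates
        weights = topv.softmax(dim=-1)             # renormalize over chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):  # run each expert on its rows only
            mask = (topi == e)                     # (batch, k) routing mask
            rows = mask.any(dim=-1)
            if rows.any():
                w = (weights * mask).sum(dim=-1)[rows].unsqueeze(-1)
                out[rows] += w * expert(x[rows])
        return out
```

Because only k of the n_experts experts fire per input, capacity grows with the number of experts while per-example compute stays roughly constant.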
papers-we-love/papers-we-love: Papers from the computer science community to read and discuss.
compsci  paper  papers 
4 days ago by djhworld
Best paper awards at AAAI, ACL, CHI, CIKM, CVPR, FOCS, FSE, ICCV, ICML, ICSE, IJCAI, INFOCOM, KDD, MOBICOM, NSDI, OSDI, PLDI, PODS, S&P, SIGCOMM, SIGIR, SIGMETRICS, SIGMOD, SODA, SOSP, STOC, UIST, VLDB, WWW
Best paper awards (since 1996) for top-tier computer science conferences: AAAI, ACL, CHI, CIKM, CVPR, FOCS, FSE, ICCV, ICML, ICSE, IJCAI, INFOCOM, KDD, MOBICOM, NSDI, OSDI, PLDI, PODS, S&P, SIGCOMM, SIGIR, SIGMETRICS, SIGMOD, SODA, SOSP, STOC, UIST, VLDB, WWW.
compsci  computerscience  papers 
4 days ago by djhworld
