
Resources | New Advances in Generative Adversarial Networks and a Complete Collection of Papers

Selected from GitHub

Contributors: 蔣思源, 吳攀


Generative adversarial networks (GANs) have recently been among the machine learning methods attracting the most attention from researchers, and deep learning pioneer Yann LeCun has spoken more than once about the great value and promise of this idea. In this article, 機器之心 (Synced) summarizes two GAN resources on GitHub: one introduces some notable recent theory and practice around GANs (such as the Wasserstein GAN), while the other collects a large number of GAN-related papers.

Links to the two original resources:

  • New advances in GAN theory & practice: https://casmls.github.io/general/2017/04/13/gan.html

  • GAN paper list project: https://github.com/nightrome/really-awesome-gan

New Advances in GAN Theory & Practice

Let us first look at the post published by Liping Liu on github.io surveying new theoretical and practical advances in GANs. The post discusses two GAN-related papers. The first, Arora et al.'s "Generalization and Equilibrium in Generative Adversarial Nets," is a theoretical study of GANs; the second, Gulrajani et al.'s "Improved Training of Wasserstein GANs," introduces a new training method for the Wasserstein GAN, which Facebook recently proposed and which has attracted wide attention. A video accompanying the original post gives a good introduction to the first paper.

GANs and the Wasserstein GAN

GAN training is a two-player game: the generator's goal is to minimize the discrepancy between the distribution of its generated samples and the data distribution, while the discriminator's job is to distinguish, as well as it can, samples from the generator's distribution from samples of the real data distribution. The generator is said to "win" when the discriminator can do no better than random guessing.

The optimization problem of the basic GAN is a min-max problem:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{h \sim p_{\text{Normal}}}[\log(1 - D(G(h)))]$$

Briefly, the optimal discriminator yields a measure of the discrepancy between the generator distribution G(h), h ~ p_Normal, and the data distribution p_data. If we had access to p_data(x) and the discriminator could be an arbitrary function, the generator's objective would amount to minimizing the Jensen-Shannon divergence between p_data and G(h).
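To make the alternating optimization concrete, here is a minimal sketch of one training step for the basic GAN objective in PyTorch. It is an illustration, not code from either paper or the original post; the generator `G`, discriminator `D`, and their optimizers are assumed to be defined elsewhere, and `D` is assumed to output a probability in (0, 1).

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_G, opt_D, real_x, noise_dim=100):
    batch = real_x.size(0)

    # Discriminator update: maximize log D(x) + log(1 - D(G(h))).
    h = torch.randn(batch, noise_dim)
    fake_x = G(h).detach()                      # stop gradients flowing into G
    d_real, d_fake = D(real_x), D(fake_x)
    loss_D = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator update, using the common non-saturating form: maximize log D(G(h)).
    h = torch.randn(batch, noise_dim)
    d_fake = D(G(h))
    loss_G = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```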

In practice, the Wasserstein distance is already being used to measure the discrepancy between two distributions. See the following post:

  • Robin Winstanley's "Modified GANs": https://casmls.github.io/general/2017/02/23/modified-gans.html

The Wasserstein distance between the data distribution and the generator distribution is:

$$W(p_{\text{data}}, p_G) = \sup_{f \in L_1} \; \mathbb{E}_{x \sim p_{\text{data}}}[f(x)] - \mathbb{E}_{x \sim p_G}[f(x)]$$

where L1 denotes the set of 1-Lipschitz functions. Here f plays the role of the discriminator; it takes the form of a neural network and is learned through GAN training. The goal is to minimize the Wasserstein distance between the two distributions.
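As a rough illustration of this dual form, the following sketch (an assumption, not code from the papers) shows one update of a Wasserstein critic `f`: it ascends the estimate of E[f(x_real)] - E[f(x_fake)] and then clips its weights, the heuristic the original WGAN uses to keep `f` approximately 1-Lipschitz.

```python
import torch

def critic_step(f, opt_f, real_x, fake_x, clip=0.01):
    # Maximize E[f(real)] - E[f(fake)], i.e. minimize its negation.
    loss = -(f(real_x).mean() - f(fake_x).mean())
    opt_f.zero_grad(); loss.backward(); opt_f.step()
    # Clip every weight into [-clip, clip] to restrict the critic's Lipschitz constant.
    with torch.no_grad():
        for p in f.parameters():
            p.clamp_(-clip, clip)
    return -loss.item()   # current estimate of the Wasserstein distance
```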

The first paper addresses the following questions:

1. The discrepancy measures are defined on distributions, but what can we say when the objective is computed from finite samples?

2. Can training reach an equilibrium?

3. What exactly does "reaching an equilibrium" mean?

The second paper studies how to penalize the optimizer so that it approximately finds the optimal discriminator within the space of 1-Lipschitz functions.

Paper 1: Generalization and Equilibrium in Generative Adversarial Nets


Paper link: https://arxiv.org/abs/1703.00573

Generalization of the distance measure

Arora et al. introduce a new distance measure, the neural network divergence. It measures the discrepancy between distributions using discriminators restricted to a class of neural networks.

[Formula: definition of the neural network divergence; see the paper.]

Theorem: when the number of samples is sufficiently large, the distance between two distributions can be approximated by the distance between their respective empirical samples.

Equilibrium

Intuition: a sufficiently powerful generator can always win the game, because with infinitely many mixture components it can approximate the data distribution. A weaker generator with a finite but large enough number of mixture components can also approximately win the game.

Setting up the game: u and v denote pure strategies of the generator and the discriminator, respectively. The payoff function of the game, F(u, v), is the GAN objective:

$$F(u, v) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D_v(x)] + \mathbb{E}_{h \sim p_{\text{Normal}}}[\log(1 - D_v(G_u(h)))]$$

By a theorem of von Neumann, mixed strategies can always achieve an equilibrium, but in this idealized case both the generator and the discriminator would have to mix over infinitely many pure strategies. The paper instead proposes an ε-approximate equilibrium that uses only finitely many pure strategies.

[Definition: ε-approximate equilibrium with finitely many pure strategies; see the paper.]

Theorem: given enough mixture components for the generator and the discriminator, the generator can approximately win the game.

MIX+GAN: mixtures of generators and discriminators

Motivated by this theoretical analysis, the paper suggests using mixtures of generators and discriminators. The model jointly optimizes T generators together with their mixture weights and T discriminators together with their mixture weights, where the weights are parameterized as w = softmax(α).

[Formula: the MIX+GAN objective, a pairwise GAN payoff weighted by the mixture weights; see the paper.]
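The following is a minimal sketch of the MIX+GAN weighting idea, assuming the standard GAN payoff; it is an illustration, not the authors' released code. A small list of generators and discriminators is kept, and their pairwise payoffs are weighted by mixture weights w = softmax(α) before backpropagating through everything.

```python
import torch

def mix_gan_payoff(generators, discriminators, alpha_g, alpha_d, real_x, noise_dim=100):
    w_g = torch.softmax(alpha_g, dim=0)     # mixture weights over generators
    w_d = torch.softmax(alpha_d, dim=0)     # mixture weights over discriminators
    batch = real_x.size(0)
    payoff = 0.0
    for i, G in enumerate(generators):
        fake_x = G(torch.randn(batch, noise_dim))
        for j, D in enumerate(discriminators):
            # Standard GAN payoff F(u_i, v_j); D is assumed to output a probability.
            f_ij = torch.log(D(real_x)).mean() + torch.log(1 - D(fake_x)).mean()
            payoff = payoff + w_g[i] * w_d[j] * f_ij
    return payoff   # the discriminator side ascends this, the generator side descends it
```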

The paper uses DCGAN as the base model and shows that MIX+DCGAN generates more realistic images and achieves a higher Inception score than plain DCGAN. See "Unsupervised representation learning with deep convolutional generative adversarial networks" by Radford, Alec, et al.: https://arxiv.org/abs/1511.06434


Figure 4: training curves of MIX+DCGAN and DCGAN

Paper 2: Wasserstein GAN Training with a Gradient Penalty (Improved Training of Wasserstein GANs)


This paper builds on a useful result: the optimal discriminator (called the critic in the paper) has a gradient of norm 1 almost everywhere. The gradient here is taken with respect to x, not with respect to the critic's parameters.

Weight clipping does not work well, for the following reasons:

1. With weight clipping, the optimizer searches for the critic in a space smaller than the set of 1-Lipschitz functions, which biases the critic toward overly simple functions.

2. Clipping also causes gradients to vanish or explode as they are backpropagated through the network's layers.

The theoretical result about the gradient, together with the drawbacks of weight clipping, motivates the new method: the gradient penalty. The critic is penalized whenever the norm of its gradient is not one. The objective function is:

$$L = \mathbb{E}_{\bar{x} \sim p_G}[D(\bar{x})] - \mathbb{E}_{x \sim p_{\text{data}}}[D(x)] + \lambda\, \mathbb{E}_{\hat{x}}\big[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\big]$$

Here $\hat{x}$ is a random point sampled on the straight line between a real sample $x$ and a generated sample $\bar{x}$.
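Below is a minimal sketch of the gradient penalty term as described above, based on the formula rather than the authors' code: sample $\hat{x}$ uniformly on the segment between a real and a generated batch, compute the critic's gradient with respect to $\hat{x}$ via autograd, and penalize the squared deviation of its norm from 1.

```python
import torch

def gradient_penalty(critic, real_x, fake_x, lam=10.0):
    fake_x = fake_x.detach()                                   # do not backprop into the generator here
    # One interpolation coefficient per sample, broadcast over the remaining dimensions.
    eps = torch.rand(real_x.size(0), *([1] * (real_x.dim() - 1)), device=real_x.device)
    x_hat = (eps * real_x + (1 - eps) * fake_x).requires_grad_(True)
    scores = critic(x_hat)
    # Gradient of the critic's output with respect to x_hat (not its parameters).
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=x_hat, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()
```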

In the experiments, GAN training with the gradient penalty converges faster than training with weight clipping. On image generation and language modeling tasks, models trained with the proposed method often achieve better results than the alternatives.


Having surveyed these recent advances in generative adversarial networks, we now list the GAN resources compiled by GitHub user Holger Caesar.

Workshops

  • NIPS 2016 Workshop on Adversarial Training [https://sites.google.com/site/nips2016adversarial/] [http://www.inference.vc/my-summary-of-adversarial-training-nips-workshop/]

Tutorials and Technical Blogs

  • How to Train a GAN? Tips and tricks to make GANs work [https://github.com/soumith/ganhacks]

  • NIPS 2016 Tutorial: Generative Adversarial Networks [https://arxiv.org/abs/1701.00160]

  • On the intuition behind deep learning & GANs—towards a fundamental understanding [https://blog.waya.ai/introduction-to-gans-a-boxing-match-b-w-neural-nets-b4e5319cc935]

  • OpenAI - Generative Models [https://blog.openai.com/generative-models/]

  • SimGANs - a game changer in unsupervised learning, self driving cars, and more [https://blog.waya.ai/simgans-applied-to-autonomous-driving-5a8c6676e36b]

Papers

Theory and Machine Learning

  • A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models [https://arxiv.org/abs/1611.03852]

  • A General Retraining Framework for Scalable Adversarial Classification [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_2.pdf]

  • Adversarial Autoencoders [https://arxiv.org/abs/1511.05644]

  • Adversarial Discriminative Domain Adaptation [https://arxiv.org/abs/1702.05464]

  • Adversarial Generator-Encoder Networks [https://arxiv.org/pdf/1704.02304.pdf]

  • Adversarial Feature Learning [https://arxiv.org/abs/1605.09782]

  • Adversarially Learned Inference [https://arxiv.org/abs/1606.00704]

  • An Adversarial Regularisation for Semi-Supervised Training of Structured Output Neural Networks [https://arxiv.org/abs/1702.02382]

  • Associative Adversarial Networks [https://arxiv.org/abs/1611.06953]

  • b-GAN: New Framework of Generative Adversarial Networks [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_4.pdf]

  • Boundary-Seeking Generative Adversarial Networks [https://arxiv.org/abs/1702.08431]

  • Conditional Generative Adversarial Nets [https://arxiv.org/abs/1411.1784]

  • Connecting Generative Adversarial Networks and Actor-Critic Methods [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_1.pdf]

  • Cooperative Training of Descriptor and Generator Networks [https://arxiv.org/abs/1609.09408]

  • Explaining and Harnessing Adversarial Examples [https://arxiv.org/abs/1412.6572]

  • f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization [https://arxiv.org/abs/1606.00709]

  • Generating images with recurrent adversarial networks [https://arxiv.org/abs/1602.05110]

  • Generative Adversarial Nets with Labeled Data by Activation Maximization [https://arxiv.org/abs/1703.02000]

  • Generative Adversarial Networks [https://arxiv.org/abs/1406.2661] [https://github.com/goodfeli/adversarial]

  • Generative Adversarial Residual Pairwise Networks for One Shot Learning [https://arxiv.org/abs/1703.08033]

  • Generative Adversarial Structured Networks [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_14.pdf]

  • Generative Moment Matching Networks [https://arxiv.org/abs/1502.02761] [https://github.com/yujiali/gmmn]

  • Improved Techniques for Training GANs [https://arxiv.org/abs/1606.03498] [https://github.com/openai/improved-gan]

  • Inverting The Generator Of A Generative Adversarial Network [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_9.pdf]

  • Learning in Implicit Generative Models [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_10.pdf]

  • Learning to Discover Cross-Domain Relations with Generative Adversarial Networks [https://arxiv.org/abs/1703.05192]

  • Least Squares Generative Adversarial Networks [https://arxiv.org/abs/1611.04076]

  • Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities [https://arxiv.org/abs/1701.06264]

  • LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation [https://arxiv.org/abs/1703.01560]

  • Maximum-Likelihood Augmented Discrete Generative Adversarial Networks [https://arxiv.org/abs/1702.07983]

  • Mode Regularized Generative Adversarial Networks [https://arxiv.org/abs/1612.02136]

  • On the Quantitative Analysis of Decoder-Based Generative Models [https://arxiv.org/abs/1611.04273]

  • SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient [https://arxiv.org/abs/1609.05473]

  • Simple Black-Box Adversarial Perturbations for Deep Networks [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_11.pdf]

  • Stacked Generative Adversarial Networks [https://arxiv.org/abs/1612.04357]

  • Training generative neural networks via Maximum Mean Discrepancy optimization [https://arxiv.org/abs/1505.03906]

  • Triple Generative Adversarial Nets [https://arxiv.org/abs/1703.02291]

  • Unrolled Generative Adversarial Networks [https://arxiv.org/abs/1611.02163]

  • Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks [https://arxiv.org/abs/1511.06434] [https://github.com/Newmu/dcgan_code] [https://github.com/pytorch/examples/tree/master/dcgan] [https://github.com/carpedm20/DCGAN-tensorflow] [https://github.com/soumith/dcgan.torch] [https://github.com/jacobgil/keras-dcgan]

  • Wasserstein GAN [https://arxiv.org/abs/1701.07875] [https://github.com/martinarjovsky/WassersteinGAN]

Vision Applications

  • Adversarial Networks for the Detection of Aggressive Prostate Cancer [https://arxiv.org/abs/1702.08014]

  • Age Progression / Regression by Conditional Adversarial Autoencoder [https://arxiv.org/abs/1702.08423]

  • ArtGAN: Artwork Synthesis with Conditional Categorial GANs [https://arxiv.org/abs/1702.03410]

  • Conditional generative adversarial nets for convolutional face generation [http://www.foldl.me/uploads/2015/conditional-gans-face-generation/paper.pdf]

  • Conditional Image Synthesis with Auxiliary Classifier GANs [https://arxiv.org/abs/1610.09585]

  • Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks [https://arxiv.org/abs/1506.05751] [https://github.com/facebook/eyescream] [http://soumith.ch/eyescream/]

  • Deep multi-scale video prediction beyond mean square error [https://arxiv.org/abs/1511.05440] [https://github.com/dyelax/Adversarial_Video_Generation]

  • Full Resolution Image Compression with Recurrent Neural Networks [https://arxiv.org/abs/1608.05148]

  • Generate To Adapt: Aligning Domains using Generative Adversarial Networks [https://arxiv.org/pdf/1704.01705.pdf]

  • Generative Adversarial Text to Image Synthesis [https://arxiv.org/abs/1605.05396] [https://github.com/paarthneekhara/text-to-image]

  • Generative Visual Manipulation on the Natural Image Manifold [http://www.eecs.berkeley.edu/~junyanz/projects/gvm/] [https://youtu.be/9c4z6YsBGQ0] [https://arxiv.org/abs/1609.03552] [https://github.com/junyanz/iGAN]

  • Image De-raining Using a Conditional Generative Adversarial Network [https://arxiv.org/abs/1701.05957]

  • Image Generation and Editing with Variational Info Generative Adversarial Networks [https://arxiv.org/abs/1701.04568]

  • Image-to-Image Translation with Conditional Adversarial Networks [https://arxiv.org/abs/1611.07004] [https://github.com/phillipi/pix2pix]

  • Imitating Driver Behavior with Generative Adversarial Networks [https://arxiv.org/abs/1701.06699]

  • Invertible Conditional GANs for image editing [https://arxiv.org/abs/1611.06355]

  • Multi-view Generative Adversarial Networks [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_13.pdf]

  • Neural Photo Editing with Introspective Adversarial Networks [https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_15.pdf]

  • Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network [https://arxiv.org/abs/1609.04802]

  • Recurrent Topic-Transition GAN for Visual Paragraph Generation [https://arxiv.org/abs/1703.07022]

  • RenderGAN: Generating Realistic Labeled Data [https://arxiv.org/abs/1611.01331]

  • SeGAN: Segmenting and Generating the Invisible [https://arxiv.org/abs/1703.10239]

  • Semantic Segmentation using Adversarial Networks [https://arxiv.org/abs/1611.08408]

  • Semi-Latent GAN: Learning to generate and modify facial images from attributes [https://arxiv.org/pdf/1704.02166.pdf]

  • TAC-GAN - Text Conditioned Auxiliary Classifier Generative Adversarial Network [https://arxiv.org/abs/1703.06412]

  • Towards Diverse and Natural Image Descriptions via a Conditional GAN [https://arxiv.org/abs/1703.06029]

  • Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro [https://arxiv.org/abs/1701.07717]

  • Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks [https://arxiv.org/abs/1703.10593]

  • Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery [https://arxiv.org/abs/1703.05921]

  • Unsupervised Cross-Domain Image Generation [https://arxiv.org/abs/1611.02200]

  • WaterGAN: Unsupervised Generative Network to Enable Real-time Color Correction of Monocular Underwater Images [https://arxiv.org/abs/1702.07392]

Other Applications

  • Adversarial Training Methods for Semi-Supervised Text Classification [https://arxiv.org/abs/1605.07725]

  • Learning to Protect Communications with Adversarial Neural Cryptography [https://arxiv.org/abs/1610.06918] [https://blog.acolyer.org/2017/02/10/learning-to-protect-communications-with-adversarial-neural-cryptography/]

  • MidiNet: A Convolutional Generative Adversarial Network for Symbolic-domain Music Generation using 1D and 2D Conditions [https://arxiv.org/abs/1703.10847]

  • Semi-supervised Learning of Compact Document Representations with Deep Networks [http://www.cs.nyu.edu/~ranzato/publications/ranzato-icml08.pdf]

  • Steganographic Generative Adversarial Networks [https://arxiv.org/abs/1703.05502]

Videos

  • Generative Adversarial Networks by Ian Goodfellow [https://channel9.msdn.com/Events/Neural-Information-Processing-Systems-Conference/Neural-Information-Processing-Systems-Conference-NIPS-2016/Generative-Adversarial-Networks]

  • Tutorial on Generative Adversarial Networks by Mark Chang [https://www.youtube.com/playlist?list=PLeeHDpwX2Kj5Ugx6c9EfDLDojuQxnmxmU]

Code

  • Cleverhans: A library for benchmarking vulnerability to adversarial examples [https://github.com/openai/cleverhans] [http://cleverhans.io/]

  • Generative Adversarial Networks (GANs) in 50 lines of code (PyTorch) [https://medium.com/@devnag/generative-adversarial-networks-gans-in-50-lines-of-code-pytorch-e81b79659e3f] [https://github.com/devnag/pytorch-generative-adversarial-networks]
