
GAN: Generative Adversarial Nets

by 제룽 2023. 7. 6.

0. Abstract
  • Original text

    We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.


  • Proposes a framework for estimating generative models through an adversarial (competing) process
  • Two models are trained simultaneously (G vs D)
  1. Generative model G
    • Imitates the distribution of the training data (generates fake data)
    • Tries to keep the discriminative model (the model that tells real from fake) from distinguishing its samples
  2. Discriminative model D
    • Estimates the probability that a sample came from the real training data rather than from G (fake data)

โ€ป G์˜ ํ•™์Šต ๊ณผ์ •์€ ํŒ๋ณ„๋ชจ๋ธ(D)๊ฐ€ G๋กœ๋ถ€ํ„ฐ ๋‚˜์˜จ(fake data) vs training (real data) ๋ฅผ ํŒ๋ณ„ํ•˜๋Š”๋ฐ ์‹ค์ˆ˜ํ•  ํ™•๋ฅ ์„ ์ตœ๋Œ€ํ™” ์‹œํ‚ค๋Š” ๊ฒƒ.

In other words, G's goal is to fool D so that D cannot tell real data and fake data apart

⇒ The paper formulates this as a minimax two-player game

⇒ In the space of G and D, as G comes to imitate the training data distribution, the probability D assigns to a sample being real rather than fake converges to 1/2

⇒ That is, distinguishing the two kinds of data becomes difficult

  • When G and D are defined as multilayer perceptrons, the entire system can be trained with back-propagation
1. Intro/Related work
  • Original text

    The promise of deep learning is to discover rich, hierarchical models [2] that represent probability distributions over the kinds of data encountered in artificial intelligence applications, such as natural images, audio waveforms containing speech, and symbols in natural language corpora. So far, the most striking successes in deep learning have involved discriminative models, usually those that map a high-dimensional, rich sensory input to a class label [14, 22]. These striking successes have primarily been based on the backpropagation and dropout algorithms, using piecewise linear units [19, 9, 10] which have a particularly well-behaved gradient. Deep generative models have had less of an impact, due to the difficulty of approximating many intractable probabilistic computations that arise in maximum likelihood estimation and related strategies, and due to difficulty of leveraging the benefits of piecewise linear units in the generative context. We propose a new generative model estimation procedure that sidesteps these difficulties.


    In the proposed adversarial nets framework, the generative model is pitted against an adversary: a discriminative model that learns to determine whether a sample is from the model distribution or the data distribution. The generative model can be thought of as analogous to a team of counterfeiters, trying to produce fake currency and use it without detection, while the discriminative model is analogous to the police, trying to detect the counterfeit currency. Competition in this game drives both teams to improve their methods until the counterfeits are indistinguishable from the genuine articles.

    This framework can yield specific training algorithms for many kinds of model and optimization algorithm. In this article, we explore the special case when the generative model generates samples by passing random noise through a multilayer perceptron, and the discriminative model is also a multilayer perceptron. We refer to this special case as adversarial nets. In this case, we can train both models using only the highly successful backpropagation and dropout algorithms [17] and sample from the generative model using only forward propagation. No approximate inference or Markov chains are necessary.

  • In the adversarial nets framework, the generator is set up to fool the discriminator, and the discriminator learns to determine whether a sample came from the distribution modeled by the generator G or from the real data distribution
  • This competitive setup pushes both models to keep improving in order to achieve their respective goals
  • ex) a counterfeiter (G) produces fake bills to fool the police (D), and the police classify each bill as real (1) or fake (0)
  • The more the counterfeiter trains, the better the forgeries get; the more the police train, the better they detect forgeries → both improve
  • Both G and D are built from multilayer perceptrons → no complicated machinery is needed; the system can be trained with forward propagation, back-propagation, and dropout → hence the name adversarial nets

⇒ In the end, the core concept of GAN is that two models with distinct roles train adversarially, developing the ability to generate 'fakes that look real'

2. Adversarial nets
  • Original text

    An alternative to directed graphical models with latent variables are undirected graphical models with latent variables, such as restricted Boltzmann machines (RBMs) [27, 16], deep Boltzmann machines (DBMs) [26] and their numerous variants. The interactions within such models are represented as the product of unnormalized potential functions, normalized by a global summation/integration over all states of the random variables. This quantity (the partition function) and its gradient are intractable for all but the most trivial instances, although they can be estimated by Markov chain Monte Carlo (MCMC) methods. Mixing poses a significant problem for learning algorithms that rely on MCMC [3, 5]. Deep belief networks (DBNs) [16] are hybrid models containing a single undirected layer and several directed layers. While a fast approximate layer-wise training criterion exists, DBNs incur the computational difficulties associated with both undirected and directed models.

    Alternative criteria that do not approximate or bound the log-likelihood have also been proposed, such as score matching [18] and noise-contrastive estimation (NCE) [13]. Both of these require the learned probability density to be analytically specified up to a normalization constant. Note that in many interesting generative models with several layers of latent variables (such as DBNs and DBMs), it is not even possible to derive a tractable unnormalized probability density. Some models such as denoising auto-encoders [30] and contractive autoencoders have learning rules very similar to score matching applied to RBMs. In NCE, as in this work, a discriminative training criterion is employed to fit a generative model. However, rather than fitting a separate discriminative model, the generative model itself is used to discriminate generated data from samples from a fixed noise distribution. Because NCE uses a fixed noise distribution, learning slows dramatically after the model has learned even an approximately correct distribution over a small subset of the observed variables.

    Finally, some techniques do not involve defining a probability distribution explicitly, but rather train a generative machine to draw samples from the desired distribution. This approach has the advantage that such machines can be designed to be trained by back-propagation. Prominent recent work in this area includes the generative stochastic network (GSN) framework [5], which extends generalized denoising auto-encoders [4]: both can be seen as defining a parameterized Markov chain, i.e., one learns the parameters of a machine that performs one step of a generative Markov chain. Compared to GSNs, the adversarial nets framework does not require a Markov chain for sampling. Because adversarial nets do not require feedback loops during generation, they are better able to leverage piecewise linear units [19, 9, 10], which improve the performance of backpropagation but have problems with unbounded activation when used in a feedback loop. More recent examples of training a generative machine by back-propagating into it include recent work on auto-encoding variational Bayes [20] and stochastic backpropagation [24].

  • Early in training, the samples G produces are poor enough that D can immediately tell they are fake rather than real data, so D(G(z)) is close to 0
  • As training progresses, G imitates the real data distribution and pushes D(G(z)) toward 1
  • G's objective is to minimize the value function below; D's objective is to maximize it
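
For reference, this is the minimax objective from the paper (its Eq. 1) that the bullets above and the two perspectives below refer to:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$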

2.1 ๋„๋‘‘์˜ ๊ด€์ 

  • G(z): a fake image
  • D(G(z)): how will D judge a fake image when it comes in?

→ ๋„๋‘‘์˜ ๊ฒฝ์šฐ, ๊ฐ€์งœ ์ด๋ฏธ์ง€๊ฐ€ ๋“ค์–ด์™”์„ ๋•Œ, ์ง„์งœ ์ด๋ฏธ์ง€๋ผ๊ณ  ํŒ๋‹จํ•˜๋Š” ๊ฒƒ์ด ๋ชฉ์ ์ž„

→ ๋”ฐ๋ผ์„œ D(G(z))๊ฐ€ 1์ด ๋˜๋Š” ๊ฒƒ์ด ์ตœ์ข… ๋ชฉํ‘œ

→ The second term then becomes log(1 − 1) = log 0 → −∞; that is, the right-hand term of the objective is driven to its minimum

  • For G, D(G(z)) approaching 1 is good
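
In equation form, the counterfeiter only influences the second term of V(D, G), and pushes it toward its minimum:

$$D(G(z)) \to 1 \quad\Rightarrow\quad \log(1 - D(G(z))) \to \log 0 = -\infty$$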

2.2 The police's (D) perspective

  • Aims for D(x) to reach 1, so that log D(x) reaches its maximum log 1 = 0 → real data must be judged as real
  • D(G(z)): the police must classify fakes as fake, so they push D(G(z)) to 0; i.e., the final goal is to bring log(1 − D(G(z))) close to log(1 − 0) = 0
  • For D, D(G(z)) approaching 0 is good
  • In other words, each log term has maximum 0 and minimum negative infinity
  • (Describing Figure 1 of the paper) blue dashed line: discriminative distribution
  • black dotted line: data generating distribution (real)
  • green solid line: generative distribution (fake)
  • z (noise): mapped into the x (image) space, it initially yields a distribution different from that of the real images (fake data). ex) for the digit 2, early outputs look like a smeared digit
  1. (a): Early in training, the real and fake distributions are completely different, and D's performance is not good either
  2. (b): During training, one of G and D is held fixed while the other is trained (here G is fixed and then D is trained). D no longer assigns probabilities as erratically as in (a) but clearly separates real from fake, which shows that D's performance has improved
  3. (c): This time D is fixed and G is trained. Once D has been trained to some degree, G learns to imitate the real data distribution, in the direction that makes discrimination hard for D
  4. (d): As this process repeats, the real and fake distributions become so similar that they can no longer be told apart; in the end D cannot distinguish the two and outputs a probability of 1/2
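
Panel (d) is exactly the paper's fixed point: once p_g = p_data, the optimal discriminator can do no better than a coin flip,

$$D^*(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)} = \frac{1}{2} \quad \text{when } p_g = p_{\text{data}}$$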
3. Theory / Proofs

3-1) Global Optimality of p_g (the distribution of G's outputs) = p_data (the real data distribution)

  • (Deriving the optimum)
  • No matter which G comes in, what is the optimal D? ⇒ The paper proves that D is optimal when it takes the form shown below
  • From D's standpoint the goal is to maximize V(G, D); from G's standpoint, to minimize it
  • What, then, is the optimal solution that minimizes V(G, D)?
  • The notation D*_G(x) is used because we have already maximized over D once, for a fixed G
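
These are the equations the bullets above refer to (Proposition 1 and the main theorem of the paper). For a fixed G, the optimal discriminator is

$$D^*_G(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)},$$

and substituting it back into the value function gives

$$C(G) = \max_D V(G, D) = -\log 4 + 2 \cdot \mathrm{JSD}(p_{\text{data}} \,\|\, p_g),$$

which attains its global minimum −log 4 exactly when p_g = p_data, where the Jensen–Shannon divergence term vanishes.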

3-2) Convergence of Algorithm 1

  • How training is implemented (see the sketch after this list)
  • Repeat for the given number of training iterations
  • In each iteration, train D k times, then train G once
  • Training D: sample m noise vectors and m real data samples, then update by ascending the stochastic gradient (gradient ascent, since D maximizes) → D is trained to output 1 for real data D(x) and 0 for D(G(z))
  • Training G: sample m noise vectors, generate m fake samples, and update by descending the gradient
4. Experiments
  1. Trained on MNIST, TFD, and CIFAR-10
  2. The generator net uses a mixture of rectified linear (ReLU) and sigmoid activations
  3. The discriminator is trained with dropout and uses maxout activations
  4. The theoretical framework permits dropout and other noise in the generator's intermediate layers, but in the experiments noise was used only as the input to the bottom-most layer
  • The samples were genuinely generated, not simply memorized and reproduced from the training set!
  • Yellow boxes: training images (the nearest training example of each neighboring generated sample)
5. Advantages and Disadvantages
  • Original text

    This new framework comes with advantages and disadvantages relative to previous modeling frameworks. The disadvantages are primarily that there is no explicit representation of pg(x), and that D must be synchronized well with G during training (in particular, G must not be trained too much without updating D, in order to avoid “the Helvetica scenario” in which G collapses too many values of z to the same value of x to have enough diversity to model pdata), much as the negative chains of a Boltzmann machine must be kept up to date between learning steps. The advantages are that Markov chains are never needed, only backprop is used to obtain gradients, no inference is needed during learning, and a wide variety of functions can be incorporated into the model. Table 2 summarizes the comparison of generative adversarial nets with other generative modeling approaches. The aforementioned advantages are primarily computational. Adversarial models may also gain some statistical advantage from the generator network not being updated directly with data examples, but only with gradients flowing through the discriminator. This means that components of the input are not copied directly into the generator’s parameters. Another advantage of adversarial networks is that they can represent very sharp, even degenerate distributions, while methods based on Markov chains require that the distribution be somewhat blurry in order for the chains to be able to mix between modes.


  1. Disadvantages
  • D and G must improve in balance with each other (G must not be trained too far ahead of D; otherwise G collapses too many z values to the same output — the "Helvetica scenario")
  2. Advantages
  • No Markov chains are needed at all; only back-propagation is used to obtain gradients
  • Can produce sharper images than methods based on Markov chains
  • No inference of any kind is needed during learning
  • A wide variety of functions can be incorporated into the model
  • ※ Markov chain
    • A Markov chain is a probabilistic model in which the current state depends only on the previous state, and the next state is predicted from the current one (see the toy example below)
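
As a toy illustration of that definition; the weather states and transition probabilities below are invented for the example:

```python
import random

# Transition table: the distribution of the next state depends ONLY on the
# current state (the Markov property), not on any earlier history.
P = {
    "sunny": (["sunny", "rainy"], [0.8, 0.2]),
    "rainy": (["sunny", "rainy"], [0.4, 0.6]),
}

state = "sunny"
path = [state]
for _ in range(10):
    next_states, probs = P[state]
    state = random.choices(next_states, weights=probs)[0]  # uses current state only
    path.append(state)

print(" -> ".join(path))
```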
