3 Method
  3.1 Objective
  3.2 Network architectures
    3.2.1 Generator with skips
    3.2.2 Markovian discriminator (PatchGAN)
  3.3 Optimization and inference
4 Experiments
  4.1 Evaluation metrics
  4.2 Analysis of the objective function
  4.3 Analysis of the generator architecture
  4.4 From PixelGANs to PatchGANs to ImageGANs
  4.5 Perceptual validation
  4.6 Semantic segmentation
  4.7 Community-driven research
5 Conclusion
Abstract
Original text
We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
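To make the "learned loss" idea concrete, below is a minimal PyTorch sketch of the combined objective the paper describes: an adversarial term, in which a conditional discriminator scores (input, output) pairs and thereby acts as a learned loss for the generator, plus a weighted L1 reconstruction term. The tiny G and D here are hypothetical stand-ins for the paper's U-Net generator and PatchGAN discriminator; the weight of 100 on the L1 term matches the paper's experiments.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the paper's networks, just enough to run the objective.
# G maps an input image to an output image; D scores (input, output) pairs,
# emitting a grid of per-patch logits as a PatchGAN does.
G = nn.Conv2d(3, 3, kernel_size=3, padding=1)
D = nn.Sequential(nn.Conv2d(6, 1, kernel_size=4, stride=2, padding=1))

bce = nn.BCEWithLogitsLoss()  # adversarial term
l1 = nn.L1Loss()              # reconstruction term
lambda_l1 = 100.0             # L1 weight used in the paper's experiments

def d_logits(x, y):
    # Conditional discriminator: input and output are concatenated channelwise.
    return D(torch.cat([x, y], dim=1))

x = torch.randn(1, 3, 64, 64)  # input image (e.g., a label or edge map)
y = torch.randn(1, 3, 64, 64)  # target image

# Discriminator step: push real pairs toward 1, generated pairs toward 0.
fake = G(x).detach()
real_out, fake_out = d_logits(x, y), d_logits(x, fake)
loss_D = (bce(real_out, torch.ones_like(real_out)) +
          bce(fake_out, torch.zeros_like(fake_out)))

# Generator step: fool D (the learned loss), plus L1 toward the ground truth.
fake = G(x)
fake_out = d_logits(x, fake)
loss_G = bce(fake_out, torch.ones_like(fake_out)) + lambda_l1 * l1(fake, y)
```

Note that because the discriminator outputs a map of per-patch logits rather than a single score, the target tensors are built with `ones_like`/`zeros_like` to match that map's shape; this is what lets the same objective penalize structure locally across the image.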