What Is a Generative Adversarial Network
Concept

Generative Adversarial Nets, abbreviated GAN. A GAN (generative adversarial network) is a model that can generate data following a particular distribution. Introduced in "Generative Adversarial Nets", Ian J. Goodfellow et al., 2014.
GAN Network Structure
Recent Progress on Generative Adversarial Networks (GANs): A Survey
How Generative Adversarial Networks and Their Variants Work: An Overview
Generative Adversarial Networks: A Survey and Taxonomy

Training a GAN
Training objective
For D: output high probability for real samples. For G: produce data to which D assigns high probability.

How GAN training differs from supervised learning
In supervised learning, training data passes through the model to produce an output; a loss function measures the difference between the output and its label, and that difference is back-propagated to update the model's parameters, as shown in the figure below.

In GAN training, the Generator receives random noise and produces an output, and the goal is to make the distribution of its outputs close to the distribution of the training data. However, instead of a hand-crafted loss function measuring the gap between the outputs and the training-data distribution, the Discriminator computes this difference. Note that this is not a difference between individual numbers but a difference between distributions, as shown in the figure below.
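Concretely, the "difference" the Discriminator provides corresponds to the minimax objective from the original 2014 paper:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]$$

D is trained to maximize V (distinguish real from fake), while G is trained to minimize it (fool D).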
Training procedure
Step 1: train D. Input: real data plus fake data generated by G. Output: the binary classification probability.
Step 2: train G. Input: random noise z. Output: the classification probability D(G(z)).

DCGAN
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
Discriminator: a model with a convolutional structure. Generator: a model with a (transposed-)convolutional structure.
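As a cross-check on the layer-size annotations in the DCGAN code (the 4 x 4 up to 64 x 64 comments), the spatial sizes follow from the standard Conv2d/ConvTranspose2d size formulas (with dilation=1 and output_padding=0). The helper below is a sketch for illustration, not part of the original post:

```python
def convT_out(n, kernel, stride, padding):
    # nn.ConvTranspose2d output size: (n - 1) * stride - 2 * padding + kernel
    return (n - 1) * stride - 2 * padding + kernel

def conv_out(n, kernel, stride, padding):
    # nn.Conv2d output size: floor((n + 2 * padding - kernel) / stride) + 1
    return (n + 2 * padding - kernel) // stride + 1

# Generator path: 1x1 noise vector -> 64x64 image
n, sizes = 1, [1]
for kernel, stride, padding in [(4, 1, 0), (4, 2, 1), (4, 2, 1), (4, 2, 1), (4, 2, 1)]:
    n = convT_out(n, kernel, stride, padding)
    sizes.append(n)
print(sizes)  # [1, 4, 8, 16, 32, 64]

# Discriminator path: 64x64 image -> 1x1 probability map
m, dsizes = 64, [64]
for kernel, stride, padding in [(4, 2, 1), (4, 2, 1), (4, 2, 1), (4, 2, 1), (4, 1, 0)]:
    m = conv_out(m, kernel, stride, padding)
    dsizes.append(m)
print(dsizes)  # [64, 32, 16, 8, 4, 1]
```

Each (4, 2, 1) layer exactly doubles (Generator) or halves (Discriminator) the spatial size, which is why the same kernel/stride/padding triple repeats through both networks.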
The DCGAN models are defined as follows:
import torch
import torch.nn as nn


class Generator(nn.Module):
    def __init__(self, nz=100, ngf=128, nc=3):
        super(Generator, self).__init__()
        self.main = nn.Sequential(
            # input is Z, going into a convolution
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8),
            nn.ReLU(True),
            # state size. (ngf*8) x 4 x 4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            # state size. (ngf*4) x 8 x 8
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            # state size. (ngf*2) x 16 x 16
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # state size. (ngf) x 32 x 32
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh()
            # state size. (nc) x 64 x 64
        )

    def forward(self, input):
        return self.main(input)

    def initialize_weights(self, w_mean=0., w_std=0.02, b_mean=1., b_std=0.02):
        for m in self.modules():
            classname = m.__class__.__name__
            if classname.find('Conv') != -1:
                nn.init.normal_(m.weight.data, w_mean, w_std)
            elif classname.find('BatchNorm') != -1:
                nn.init.normal_(m.weight.data, b_mean, b_std)
                nn.init.constant_(m.bias.data, 0)


class Discriminator(nn.Module):
    def __init__(self, nc=3, ndf=128):
        super(Discriminator, self).__init__()
        self.main = nn.Sequential(
            # input is (nc) x 64 x 64
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf) x 32 x 32
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*2) x 16 x 16
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*4) x 8 x 8
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*8) x 4 x 4
            nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid()
        )

    def forward(self, input):
        return self.main(input)

    def initialize_weights(self, w_mean=0., w_std=0.02, b_mean=1., b_std=0.02):
        for m in self.modules():
            classname = m.__class__.__name__
            if classname.find('Conv') != -1:
                nn.init.normal_(m.weight.data, w_mean, w_std)
            elif classname.find('BatchNorm') != -1:
                nn.init.normal_(m.weight.data, b_mean, b_std)
                nn.init.constant_(m.bias.data, 0)
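The two-step procedure described earlier (train D on real plus fake data, then train G through D) can also be sketched end-to-end. The following is a minimal, self-contained illustration using tiny fully-connected networks on a toy 1-D Gaussian dataset instead of the DCGAN models, so it runs in seconds; the network shapes, learning rates, and step count here are assumptions for illustration only:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy G and D: tiny MLPs standing in for the DCGAN models.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.LeakyReLU(0.2), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-3)
bce = nn.BCELoss()
real_label, fake_label = 1., 0.
batch = 64

for step in range(200):
    # "Real" data: samples from N(3, 0.5)
    real = 3.0 + 0.5 * torch.randn(batch, 1)

    # Step 1: train D on real data (label 1) and G's fakes (label 0).
    # detach() stops D's loss from updating G.
    opt_d.zero_grad()
    z = torch.randn(batch, 8)
    fake = G(z)
    loss_d = bce(D(real), torch.full((batch, 1), real_label)) + \
             bce(D(fake.detach()), torch.full((batch, 1), fake_label))
    loss_d.backward()
    opt_d.step()

    # Step 2: train G so that D(G(z)) is pushed toward the "real" label.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.full((batch, 1), real_label))
    loss_g.backward()
    opt_g.step()

print(loss_d.item(), loss_g.item())
```

The same loop structure applies to the DCGAN models above; only the data (images instead of scalars), the noise shape (nz x 1 x 1), and the optimizers' hyperparameters change.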