# The Generator

The generator part of a GAN learns to create fake data by incorporating feedback from the discriminator. It learns to make the discriminator classify its output as real.

Generator training requires tighter integration between the generator and the discriminator than discriminator training does. The portion of the GAN that trains the generator includes:

- random input
- generator network, which transforms the random input into a data instance
- discriminator network, which classifies the generated data
- discriminator output
- generator loss, which penalizes the generator for failing to fool the discriminator

**Figure 1: Backpropagation in generator training.**

Random Input
------------

Neural networks need some form of input. Normally we input data that we want to do something with, like an instance that we want to classify or make a prediction about. But what do we use as input for a network that outputs entirely new data instances?

In its most basic form, a GAN takes random noise as its input. The generator then transforms this noise into a meaningful output. By introducing noise, we can get the GAN to produce a wide variety of data, sampling from different places in the target distribution.

Experiments suggest that the distribution of the noise doesn't matter much, so we can choose something that's easy to sample from, like a uniform distribution. For convenience, the space from which the noise is sampled usually has lower dimensionality than the output space.

| **Note:** Some GANs use non-random input to shape the output. See [GAN
| Variations](/machine-learning/gan/applications).
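To make the random-input step concrete, here's a minimal sketch in PyTorch (an illustrative choice; any framework works the same way). The layer sizes and the `LATENT_DIM` and `DATA_DIM` constants are assumptions made for the example, not values from this course:

```python
import torch
from torch import nn

# Illustrative dimensions: the noise ("latent") space is typically much
# smaller than the output space.
LATENT_DIM = 64   # dimensionality of the sampled noise
DATA_DIM = 784    # dimensionality of the output, e.g. a flattened 28x28 image

# A toy fully connected generator: random noise in, data-shaped output out.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, DATA_DIM),
    nn.Tanh(),  # squash outputs to [-1, 1] to match normalized training data
)

# Sample a batch of noise and transform it into fake data instances.
# A standard normal works as well as a uniform distribution here.
noise = torch.randn(16, LATENT_DIM)
fake_data = generator(noise)  # shape: (16, DATA_DIM)
```

Each row of `noise` is a different point in the latent space, which is what lets a trained generator produce a variety of outputs rather than one fixed instance.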
Using the Discriminator to Train the Generator
----------------------------------------------

To train a neural net, we alter the net's weights to reduce the error or loss of its output. In our GAN, however, the generator is not directly connected to the loss that we're trying to affect. The generator feeds into the discriminator net, and the *discriminator* produces the output we're trying to affect. The generator loss penalizes the generator for producing a sample that the discriminator network classifies as fake.

This extra chunk of network must be included in backpropagation. Backpropagation adjusts each weight in the right direction by calculating the weight's impact on the output: how the output would change if you changed the weight. But the impact of a generator weight depends on the impact of the discriminator weights it feeds into. So backpropagation starts at the output and flows back through the discriminator into the generator.

At the same time, we don't want the discriminator to change during generator training. Trying to hit a moving target would make a hard problem even harder for the generator.

So we train the generator with the following procedure, sketched in code below:

1. Sample random noise.
2. Produce generator output from the sampled noise.
3. Get the discriminator's "real" or "fake" classification for the generator output.
4. Calculate the loss from the discriminator's classification.
5. Backpropagate through both the discriminator and the generator to obtain gradients.
6. Use the gradients to change only the generator weights.

This is one iteration of generator training. In the next section we'll see how to juggle the training of both the generator and the discriminator.
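To tie the six steps together, here's a hedged PyTorch sketch of a single generator training iteration. The discriminator architecture, the binary cross-entropy loss, and the optimizer settings are assumptions made for the sake of a runnable example; the essential detail is that the optimizer holds only the generator's parameters:

```python
import torch
from torch import nn

LATENT_DIM, DATA_DIM = 64, 784  # same illustrative dimensions as above

# The toy generator from the earlier sketch.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# A toy discriminator: maps a data instance to the probability it is real.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
# The optimizer holds only generator parameters, so the update in step 6
# cannot change the discriminator's weights.
gen_optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)

def generator_training_step(batch_size: int = 16) -> float:
    # 1-2. Sample random noise and produce generator output from it.
    noise = torch.randn(batch_size, LATENT_DIM)
    fake_data = generator(noise)

    # 3. Get the discriminator's "real" or "fake" classification.
    prediction = discriminator(fake_data)

    # 4. Calculate the generator loss: penalize outputs the discriminator
    #    labels fake by pushing predictions toward the "real" label (1.0).
    loss = loss_fn(prediction, torch.ones(batch_size, 1))

    # 5. Backpropagate through both the discriminator and the generator.
    gen_optimizer.zero_grad()
    loss.backward()

    # 6. Use the gradients to change only the generator weights.
    gen_optimizer.step()
    return loss.item()
```

Note that `loss.backward()` still computes gradients for the discriminator's parameters; they're simply never applied, because `gen_optimizer` doesn't know about them. A full training loop would zero them before the discriminator's own update, which is part of the juggling act covered next.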