The Discriminator
The discriminator in a GAN is simply a classifier. It tries to distinguish real data from the data created by the generator. It can use any network architecture appropriate to the type of data it's classifying.
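As a concrete illustration, here is a minimal sketch of what such a classifier might look like for image data, written with TensorFlow/Keras. The 28x28 grayscale input shape and the layer sizes are assumptions for illustration only; any architecture suited to the data type would serve.

```python
import tensorflow as tf
from tensorflow.keras import layers

def make_discriminator():
    """A small convolutional classifier that outputs one real-vs-fake logit."""
    return tf.keras.Sequential([
        layers.Conv2D(64, kernel_size=5, strides=2, padding="same",
                      input_shape=(28, 28, 1)),
        layers.LeakyReLU(0.2),
        layers.Conv2D(128, kernel_size=5, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1),  # single logit: higher means "more likely real"
    ])
```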

Figure 1: Backpropagation in discriminator training.
Discriminator Training Data
The discriminator's training data comes from two sources:
- Real data instances, such as real pictures of people. The discriminator uses these instances as positive examples during training.
- Fake data instances created by the generator. The discriminator uses these instances as negative examples during training.
In Figure 1, the two "Sample" boxes represent these two data sources feeding into the discriminator. During discriminator training the generator does not train. Its weights remain constant while it produces examples for the discriminator to train on.
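The following sketch shows one way to assemble such a batch: real instances labeled 1, generated instances labeled 0, with the generator run only in inference mode so its weights stay fixed. The names `generator`, `real_images`, and the noise dimension of 100 are assumptions, not part of any specific library API.

```python
import tensorflow as tf

def make_discriminator_batch(generator, real_images, noise_dim=100):
    """Combine the two 'Sample' sources from Figure 1 into one labeled batch."""
    batch_size = real_images.shape[0]
    noise = tf.random.normal([batch_size, noise_dim])
    # training=False: the generator only produces examples; it is not updated.
    fake_images = generator(noise, training=False)

    images = tf.concat([real_images, fake_images], axis=0)
    # Real instances are positive examples (1); fake instances are negative (0).
    labels = tf.concat([tf.ones([batch_size, 1]),
                        tf.zeros([batch_size, 1])], axis=0)
    return images, labels
```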
Training the Discriminator
The discriminator connects to two loss functions. During discriminator training, the discriminator ignores the generator loss and uses only the discriminator loss. We use the generator loss during generator training, as described in the next section.
During discriminator training:

- The discriminator classifies both real data and fake data from the generator.
- The discriminator loss penalizes the discriminator for misclassifying a real instance as fake or a fake instance as real.
- The discriminator updates its weights through backpropagation from the discriminator loss through the discriminator network, as sketched below.
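The sketch below puts these steps together as a single discriminator update. It assumes `discriminator`, `generator`, and `disc_optimizer` are defined elsewhere and uses binary cross-entropy as the discriminator loss; gradients are taken only with respect to the discriminator's variables, so the generator is left untouched.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_train_step(real_images, noise_dim=100):
    noise = tf.random.normal([real_images.shape[0], noise_dim])
    fake_images = generator(noise, training=False)  # generator weights frozen

    with tf.GradientTape() as tape:
        # Step 1: classify both real and fake data.
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # Step 2: penalize real classified as fake and fake classified as real.
        real_loss = bce(tf.ones_like(real_logits), real_logits)
        fake_loss = bce(tf.zeros_like(fake_logits), fake_logits)
        disc_loss = real_loss + fake_loss

    # Step 3: backpropagate the discriminator loss through the discriminator
    # network only, then update its weights.
    grads = tape.gradient(disc_loss, discriminator.trainable_variables)
    disc_optimizer.apply_gradients(zip(grads, discriminator.trainable_variables))
    return disc_loss
```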
In the next section we'll see why the generator loss connects to the discriminator.