Neural networks: Interactive exercises
In the interactive exercises below, you'll further explore the inner workings of
neural networks. First, you'll see how parameter and hyperparameter changes
affect the network's predictions. Then you'll use what you've learned to train a
neural network to fit nonlinear data.
Exercise 1
The following widget sets up a neural network with the following configuration:
- Input layer with 3 neurons containing the values 0.00, 0.00, and 0.00
- Hidden layer with 4 neurons
- Output layer with 1 neuron
- ReLU activation function applied to all hidden-layer nodes and the output node
Review the initial setup of the network (note: do not click the ▶️ or
>| buttons yet), and then complete the tasks below the widget.
Task 1
The values for the three input features to the neural network model are all
0.00. Click each of the nodes in the network to see all the initialized
values. Before hitting the Play (▶️) button, answer this question:
What kind of output value do you think will be produced: positive, negative, or 0?
Positive output value
You chose positive output value. Follow the instructions below to perform
inference on the input data and see if you're right.
Negative output value
You chose negative output value. Follow the instructions below to perform
inference on the input data and see if you're right.
Output value of 0
You chose output value of 0. Follow the instructions below to perform
inference on the input data and see if you're right.
Now click the Play (▶️) button above the network, and watch all the hidden-layer
and output node values populate. Was your answer above correct?
Click here for an explanation
The exact output value you get will vary based on how the weight
and bias parameters are randomly initialized. However, since each neuron
in the input layer has a value of 0, the weights used to calculate the
hidden-layer node values will all be zeroed out. For example, the first
hidden-layer node calculation will be:
y = ReLU(w11 * 0.00 + w21 * 0.00 + w31 * 0.00 + b)
y = ReLU(b)
So each hidden-layer node's value will be equal to the ReLU value of the
bias (b), which will be 0 if b is negative and b itself if b is 0 or
positive.
The value of the output node will then be calculated as follows:
y = ReLU(w11 * x11 + w21 * x21 + w31 * x31 + w41 * x41 + b)
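The zeroed-out forward pass above can be checked numerically. Here is a minimal NumPy sketch; the random weights and biases are hypothetical stand-ins for the widget's initialization:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
x = np.zeros(3)               # the three input features, all 0.00
W1 = rng.normal(size=(4, 3))  # hypothetical random weights, 4 hidden neurons
b1 = rng.normal(size=4)       # hypothetical random biases

hidden = relu(W1 @ x + b1)    # with x = 0, every weight term vanishes
print(np.allclose(hidden, relu(b1)))  # True: each node is just ReLU(b)
```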
Task 2
Before modifying the neural network, answer the following question:
If you add another hidden layer to the neural network after the first hidden
layer, and give this new layer 3 nodes, keeping all input and weight/bias
parameters the same, which other nodes' calculations will be affected?
All the nodes in the network, except the input nodes
You chose all the nodes in the network, except the input nodes. Follow the
instructions below to update the neural network and see if you're correct.
Just the nodes in the first hidden layer
You chose just the nodes in the first hidden layer. Follow the instructions
below to update the neural network and see if you're correct.
Just the output node
You chose just the output node. Follow the instructions below to update the
neural network and see if you're correct.
Now modify the neural network to add a new hidden layer with 3 nodes as follows:
1. Click the + button to the left of the text 1 hidden layer to add a new
hidden layer before the output layer.
2. Click the + button above the new hidden layer twice to add 2 more nodes
to the layer.
Was your answer above correct?
Click here for an explanation
Only the output node changes. Because inference for this neural network
is "feed-forward" (calculations progress from start to finish), the addition
of a new layer to the network will only affect nodes after the new layer,
not those that precede it.
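Because computation flows strictly forward, inserting a layer cannot retroactively change the layers before it. A small NumPy sketch of this (all parameter values are hypothetical):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(1)
x = rng.normal(size=3)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
h1_before = relu(W1 @ x + b1)          # first hidden layer, original network

# Insert a new 3-node hidden layer between the first layer and the output.
W_new, b_new = rng.normal(size=(3, 4)), rng.normal(size=3)
h_new = relu(W_new @ h1_before + b_new)

h1_after = relu(W1 @ x + b1)           # first layer's calculation is untouched
print(np.array_equal(h1_before, h1_after))  # True: only downstream nodes change
```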
Task 3
Click the second node (from the top) in the first hidden layer of the network
graph. Before making any changes to the network configuration, answer the
following question:
If you change the value of the weight w12 (displayed below the first input
node, x1), which other nodes' calculations could be affected for some input
values?
None
You chose none. Follow the instructions below to update the neural network
and see if you're correct.
The second node in the first hidden layer, all the nodes in the second hidden
layer, and the output node
You chose the second node in the first hidden layer, all the nodes in the
second hidden layer, and the output node. Follow the instructions below to
update the neural network and see if you're correct.
All the nodes in the first hidden layer, the second hidden layer, and the
output layer
You chose all the nodes in the first hidden layer, the second hidden layer,
and the output layer. Follow the instructions below to update the neural
network and see if you're correct.
Now, click in the text field for the weight w12 (displayed below the
first input node, x1), change its value to 5.00, and hit Enter.
Observe the updates to the graph.
Was your answer correct? Be careful when verifying your answer: if a node
value doesn't change, does that mean the underlying calculation didn't change?
Click here for an explanation
The only node affected in the first hidden layer is the second node (the
one you clicked). The value calculations for the other nodes in the first
hidden layer do not contain w12 as a parameter, so they are not
affected. All the nodes in the second hidden layer are affected, as their
calculations depend on the value of the second node in the first
hidden layer. Similarly, the output node value is affected, because its
calculations depend on the values of the nodes in the second hidden layer.
Did you think the answer was "none" because none of the node values in the
network changed when you changed the weight value? Note that an underlying
calculation for a node may change without changing the node's value
(for example, ReLU(0) and ReLU(-5) both produce an output of 0).
Don't make assumptions about how the network was affected just by
looking at the node values; make sure to review the calculations as well.
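The ReLU pitfall mentioned above is easy to demonstrate: two different pre-activation values can map to the same node value.

```python
def relu(x):
    # ReLU clamps every negative input to zero.
    return max(0.0, x)

print(relu(0.0), relu(-5.0))  # 0.0 0.0 — the calculation changed, the value didn't
```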
Exercise 2
In the Feature cross exercises in the Categorical data module, you manually
constructed feature crosses to fit nonlinear data. Now, you'll see if you can
build a neural network that can automatically learn how to fit nonlinear data
during training.
Your task: configure a neural network that can separate the orange dots from
the blue dots in the diagram below, achieving a loss of less than 0.2 on both
the training and test data.
Instructions:
In the interactive widget below:
1. Modify the neural network hyperparameters by experimenting with some of the
following configuration settings:
   - Add or remove hidden layers by clicking the + and - buttons to the left
     of the HIDDEN LAYERS heading in the network diagram.
   - Add or remove neurons from a hidden layer by clicking the + and -
     buttons above a hidden-layer column.
   - Change the learning rate by choosing a new value from the Learning rate
     drop-down above the diagram.
   - Change the activation function by choosing a new value from the
     Activation drop-down above the diagram.
2. Click the Play (▶️) button above the diagram to train the neural network
model using the specified parameters.
3. Observe the visualization of the model fitting the data as training
progresses, as well as the Test loss and Training loss values in the
Output section.
4. If the model does not achieve loss below 0.2 on the test and training data,
click Reset and repeat steps 1-3 with a different set of configuration
settings. Repeat this process until you achieve the preferred results.
Click here for our solution
We were able to achieve both test and training loss below 0.2 by:
- Adding 1 hidden layer containing 3 neurons.
- Choosing a learning rate of 0.01.
- Choosing an activation function of ReLU.
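The same configuration can be approximated outside the widget. The NumPy sketch below trains one hidden layer of 3 ReLU neurons by gradient descent at learning rate 0.01 on a synthetic ring-shaped dataset; the dataset, initialization, and sigmoid output head are hypothetical stand-ins for the widget's internals, not an exact reproduction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the widget's data: blue points near the
# origin (label 0), orange points on a surrounding ring (label 1).
n = 200
r = np.concatenate([rng.uniform(0, 1, n), rng.uniform(2, 3, n)])
theta = rng.uniform(0, 2 * np.pi, 2 * n)
X = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
y = np.concatenate([np.zeros(n), np.ones(n)])

# One hidden layer with 3 neurons, mirroring the solution settings.
W1, b1 = rng.normal(scale=0.5, size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(scale=0.5, size=3), 0.0

def forward(X):
    h = np.maximum(0.0, X @ W1.T + b1)    # ReLU hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output for classification
    return h, p

def log_loss(p):
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

initial = log_loss(forward(X)[1])
lr = 0.01                                  # the chosen learning rate
for _ in range(5000):
    h, p = forward(X)
    g_out = p - y                          # gradient of log loss w.r.t. logit
    gW2, gb2 = g_out @ h / len(X), g_out.mean()
    g_h = np.outer(g_out, W2) * (h > 0)    # backprop through ReLU
    gW1, gb1 = g_h.T @ X / len(X), g_h.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

final = log_loss(forward(X)[1])
print(final < initial)  # True: loss decreases as the nonlinear boundary is learned
```

The ring cannot be separated by a single line, so the hidden layer's ReLU units have to combine into a piecewise-linear boundary, which is exactly what the widget visualizes during training.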
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated (UTC): 2024-08-16.