Neural networks: Interactive exercises
In the interactive exercises below, you'll further explore the inner workings of
neural networks. First, you'll see how parameter and hyperparameter changes
affect the network's predictions. Then you'll use what you've learned to train a
neural network to fit nonlinear data.
Exercise 1
The following widget sets up a neural network with the following configuration:
- Input layer with 3 neurons containing the values 0.00, 0.00, and 0.00
- Hidden layer with 4 neurons
- Output layer with 1 neuron
- ReLU activation function applied to all hidden-layer nodes and the output node
Review the initial setup of the network (note: do not click the ▶️ or
>| buttons yet), and then complete the tasks below the widget.
Task 1
The values for the three input features to the neural network model are all
0.00. Click each of the nodes in the network to see all the initialized
values. Before hitting the Play (▶️) button, answer this question:
What kind of output value do you think will be produced: positive, negative, or 0?
Positive output value
You chose positive output value. Follow the instructions below to perform
inference on the input data and see if you're right.
Negative output value
You chose negative output value. Follow the instructions below to perform
inference on the input data and see if you're right.
Output value of 0
You chose output value of 0. Follow the instructions below to perform
inference on the input data and see if you're right.
Now click the Play (▶️) button above the network, and watch all the hidden-layer
and output node values populate. Was your answer above correct?
Click here for an explanation
The exact output value you get will vary based on how the weight
and bias parameters are randomly initialized. However, since each neuron
in the input layer has a value of 0, the weights used to calculate the
hidden-layer node values will all be zeroed out. For example, the first
hidden-layer node calculation will be:

y = ReLU(w11 * 0.00 + w21 * 0.00 + w31 * 0.00 + b)

y = ReLU(b)

So each hidden-layer node's value will be equal to the ReLU value of the
bias (b), which will be 0 if b is negative, and b itself if b is 0 or
positive.

The value of the output node will then be calculated as follows:

y = ReLU(w11 * x11 + w21 * x21 + w31 * x31 + w41 * x41 + b)
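This zero-input behavior is easy to check in a few lines of NumPy. The layer sizes (3 inputs, 4 hidden neurons, 1 output) and the ReLU activation come from the exercise setup above; the random weights and biases are stand-ins, since the widget initializes its own values.

```python
import numpy as np

def relu(z):
    # ReLU zeroes out negative values and passes the rest through.
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)

# 3 inputs -> 4 hidden neurons -> 1 output, as in the widget.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

x = np.zeros(3)  # all three input features are 0.00

hidden = relu(W1 @ x + b1)        # W1 @ x is all zeros, so this is relu(b1)
output = relu(W2 @ hidden + b2)   # the output is then ReLU of a single sum

# With zero inputs, each hidden value is just ReLU of its bias.
print(np.allclose(hidden, relu(b1)))  # True
```

Whether the final output is positive or 0 depends entirely on the randomly drawn biases and the second-layer weights, which is why the widget's result varies from run to run.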
Task 2
Before modifying the neural network, answer the following question:
If you add another hidden layer to the neural network after the first hidden
layer, and give this new layer 3 nodes, keeping all input and weight/bias
parameters the same, which other nodes' calculations will be affected?
All the nodes in the network, except the input nodes
You chose all the nodes in the network, except the input nodes. Follow the
instructions below to update the neural network and see if you're correct.
Just the nodes in the first hidden layer
You chose just the nodes in the first hidden layer. Follow the instructions
below to update the neural network and see if you're correct.
Just the output node
You chose just the output node. Follow the instructions below to update the
neural network and see if you're correct.
Now modify the neural network to add a new hidden layer with 3 nodes as follows:
- Click the + button to the left of the text 1 hidden layer to add a new
hidden layer before the output layer.
- Click the + button above the new hidden layer twice to add 2 more nodes
to the layer.
Was your answer above correct?
Click here for an explanation
Only the output node changes. Because inference for this neural network
is "feed-forward" (calculations progress from start to finish), the addition
of a new layer to the network will only affect nodes after the new
layer, not those that precede it.
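The feed-forward argument can be sketched in NumPy (with arbitrary stand-in weights, not the widget's values): inserting a 3-node layer between the first hidden layer and the output leaves the first hidden layer's values untouched, because those values are computed before the new layer is ever reached.

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)
rng = np.random.default_rng(1)

x = rng.normal(size=3)  # arbitrary input values
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)

# First hidden layer of the original network.
hidden1_before = relu(W1 @ x + b1)

# Insert a new 3-node hidden layer after the first one; the first
# layer's inputs and parameters are unchanged.
W_new, b_new = rng.normal(size=(3, 4)), rng.normal(size=3)

hidden1_after = relu(W1 @ x + b1)              # identical computation
hidden_new = relu(W_new @ hidden1_after + b_new)

# Layers before the insertion point are unaffected.
print(np.array_equal(hidden1_before, hidden1_after))  # True
```

Only the computation downstream of the new layer (here, the output node fed by hidden_new) produces different values.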
Task 3
Click the second node (from the top) in the first hidden layer of the network
graph. Before making any changes to the network configuration, answer the
following question:
If you change the value of the weight w12 (displayed below the first input
node, x1), which other nodes' calculations could be affected for some input
values?
None
You chose none. Follow the instructions below to update the neural network
and see if you're correct.
The second node in the first hidden layer, all the nodes in the second hidden layer, and the output node
You chose the second node in the first hidden layer, all the nodes in the
second hidden layer, and the output node. Follow the instructions below to
update the neural network and see if you're correct.
All the nodes in the first hidden layer, the second hidden layer, and the output layer
You chose all the nodes in the first hidden layer, the second hidden layer,
and the output layer. Follow the instructions below to update the neural
network and see if you're correct.
Now, click in the text field for the weight w12 (displayed below the
first input node, x1), change its value to 5.00, and hit Enter.
Observe the updates to the graph.
Was your answer correct? Be careful when verifying your answer: if a node
value doesn't change, does that mean the underlying calculation didn't change?
Click here for an explanation
The only node affected in the first hidden layer is the second node (the
one you clicked). The value calculations for the other nodes in the first
hidden layer do not contain w12 as a parameter, so they are not
affected. All the nodes in the second hidden layer are affected, as their
calculations depend on the value of the second node in the first
hidden layer. Similarly, the output node value is affected, because its
calculations depend on the values of the nodes in the second hidden layer.

Did you think the answer was "none" because none of the node values in the
network changed when you changed the weight value? Note that an underlying
calculation for a node may change without changing the node's value
(for example, ReLU(0) and ReLU(–5) both produce an output of 0).
Don't make assumptions about how the network was affected just by
looking at the node values; make sure to review the calculations as well.
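The pitfall described above is easy to reproduce: when ReLU's pre-activation input stays negative, even a large change to a weight leaves the node's value at 0. A minimal sketch with made-up numbers (not the widget's actual parameters):

```python
def relu(z):
    # ReLU clamps any negative pre-activation to 0.
    return max(0.0, z)

x = 1.0
bias = -10.0  # a large negative bias keeps the pre-activation negative

before = relu(0.5 * x + bias)  # relu(-9.5) -> 0.0
after = relu(5.0 * x + bias)   # relu(-5.0) -> 0.0: the calculation
                               # changed, but the node value didn't

print(before, after)  # 0.0 0.0
```

The pre-activation moved from -9.5 to -5.0, so the underlying calculation clearly changed, yet the displayed node value is 0 in both cases.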
Exercise 2
In the Feature cross exercises in the Categorical data module,
you manually constructed feature crosses to fit nonlinear data.
Now, you'll see if you can build a neural network that can automatically learn
how to fit nonlinear data during training.

Your task: configure a neural network that can separate the orange dots from
the blue dots in the diagram below, achieving a loss of less than 0.2 on both
the training and test data.
Instructions:
In the interactive widget below:
1. Modify the neural network hyperparameters by experimenting with some of the
following config settings:
- Add or remove hidden layers by clicking the + and - buttons to the left
of the HIDDEN LAYERS heading in the network diagram.
- Add or remove neurons from a hidden layer by clicking the + and -
buttons above a hidden-layer column.
- Change the learning rate by choosing a new value from the Learning rate
drop-down above the diagram.
- Change the activation function by choosing a new value from the
Activation drop-down above the diagram.
2. Click the Play (▶️) button above the diagram to train the neural network
model using the specified parameters.
3. Observe the visualization of the model fitting the data as training
progresses, as well as the Test loss and Training loss values in the
Output section.
4. If the model does not achieve loss below 0.2 on the test and training data,
click reset, and repeat steps 1 to 3 with a different set of configuration
settings. Repeat this process until you achieve the preferred results.
Click here for our solution
We were able to achieve both test and training loss below 0.2 by:
- Adding 1 hidden layer containing 3 neurons.
- Choosing a learning rate of 0.01.
- Choosing an activation function of ReLU.
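The playground itself can't be reproduced in code here, but the spirit of the solution can be sketched in plain NumPy: one hidden layer of 3 ReLU neurons trained by gradient descent on a synthetic ring-versus-cluster dataset. The dataset, initialization scale, learning rate, and step count below are all illustrative assumptions, not the playground's internals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the "orange vs. blue dots": an inner cluster (class 1)
# surrounded by an outer ring (class 0), which is not linearly separable.
n = 200
radius = np.concatenate([rng.uniform(0.0, 1.0, n), rng.uniform(2.0, 3.0, n)])
angle = rng.uniform(0.0, 2 * np.pi, 2 * n)
X = np.column_stack([radius * np.cos(angle), radius * np.sin(angle)])
y = np.concatenate([np.ones(n), np.zeros(n)])

relu = lambda z: np.maximum(0.0, z)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 3 ReLU neurons, as in the solution above.
W1, b1 = rng.normal(scale=0.5, size=(2, 3)), np.zeros(3)
w2, b2 = rng.normal(scale=0.5, size=3), 0.0

def forward(X):
    Z1 = X @ W1 + b1
    A1 = relu(Z1)
    p = sigmoid(A1 @ w2 + b2)  # probability of class 1
    return Z1, A1, p

def log_loss(p, y):
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

_, _, p = forward(X)
initial_loss = log_loss(p, y)

lr = 0.1  # larger than the playground's 0.01, to converge in fewer steps
for _ in range(2000):
    Z1, A1, p = forward(X)
    dz2 = (p - y) / len(y)               # gradient of log loss at the logit
    dw2, db2 = A1.T @ dz2, dz2.sum()
    dZ1 = np.outer(dz2, w2) * (Z1 > 0)   # ReLU passes gradient where Z1 > 0
    dW1, db1 = X.T @ dZ1, dZ1.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    w2 -= lr * dw2; b2 -= lr * db2

_, _, p = forward(X)
final_loss = log_loss(p, y)
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

The three ReLU units effectively learn three half-plane "folds" whose combination carves out the inner cluster, which is the same kind of nonlinear boundary the playground model discovers.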
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-08-16 UTC.