Categorical data: Feature cross exercises
Playground (https://playground.tensorflow.org/) is an interactive application that lets you manipulate various aspects of training and testing a machine learning model. With Playground, you can select features and adjust hyperparameters, and then discover how your choices influence a model.
This page contains two Playground exercises.
Exercise 1: A basic feature cross
For this exercise, focus on the following parts of the Playground user interface:
- Under FEATURES, notice the three possible model features: x1, x2, and the feature cross x1x2.
- Under OUTPUT, you'll see a square containing orange and blue dots. Imagine that you're looking at a square forest, where the orange dots mark the positions of sick trees and the blue dots mark the positions of healthy trees.
- If you look closely at the area between FEATURES and OUTPUT, you'll see three faint dashed lines connecting each feature to the output. The width of each dashed line reflects the weight currently associated with that feature. The lines start out very faint because every feature's weight is initialized to 0. As a weight grows or shrinks, the thickness of its line grows or shrinks along with it.
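The dashed lines visualize a simple weighted combination of the selected features. As a purely illustrative sketch (not Playground's actual code; all names are chosen here for clarity), a linear model over these three features can be written as:

```python
# Illustrative sketch only -- not Playground's implementation.
# The model's raw output is a weighted sum of the selected features.
def model_output(x1, x2, w1=0.0, w2=0.0, w3=0.0):
    """Weighted sum of x1, x2, and the feature cross x1*x2."""
    return w1 * x1 + w2 * x2 + w3 * (x1 * x2)

# With every weight still at its initial value of 0, the output is 0 everywhere,
# which is why the dashed lines are faint and the background shows no color yet.
print(model_output(0.5, -0.8))  # 0.0
```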
Task 1: Explore Playground by doing the following:
1. Click the faint line that connects feature x1 to the output. A popup appears.
2. In the popup, enter a weight of 1.0.
3. Press Enter.
Notice the following:
- The dashed line for x1 becomes thicker as the weight increases from 0 to 1.0.
- An orange and blue background now appears.
  - The orange background shows where the model guesses the sick trees are.
  - The blue background shows where the model guesses the healthy trees are. The model is doing a terrible job; about half of its guesses are wrong.
- Because the weight for x1 is 1.0 and the weights for the other features are 0, the model's output matches x1's values exactly.
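To make that last point concrete, here is a small, self-contained sketch (the same illustrative model as above, with assumed names): with w1 = 1.0 and the other weights at 0, the output is exactly x1, so x2 has no influence on the prediction at all.

```python
# Illustrative sketch: with w1 = 1.0 and the other weights 0, the output is just x1.
def model_output(x1, x2, w1=1.0, w2=0.0, w3=0.0):
    return w1 * x1 + w2 * x2 + w3 * (x1 * x2)

# These two points differ only in x2, yet they receive identical scores and
# therefore the same background color -- which is why roughly half the
# guesses are wrong for this data.
print(model_output(0.7,  0.7))   # 0.7
print(model_output(0.7, -0.7))   # 0.7
```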
Task 2: Change the weights of any or all of the three features so that the model (the background colors) successfully predicts which trees are sick and which are healthy. The solution appears just below Playground.
Click here for the solution to Task 2
- w1 = 0
- w2 = 0
- x1x2 = any positive value
Just for fun, what happens if you enter a negative value for the feature cross?
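If you want to check the quadrant logic outside the tool, here is a hedged sketch. It assumes (as the solution above implies) that one class of trees sits in the quadrants where x1 and x2 share a sign and the other class where the signs differ; the weight value is arbitrary.

```python
# Illustrative sketch of why a positive weight on the x1*x2 cross works.
def cross_score(x1, x2, w3):
    return w3 * (x1 * x2)

w3 = 1.0   # any positive value
for x1, x2 in [(0.6, 0.8), (-0.6, -0.8), (0.6, -0.8), (-0.6, 0.8)]:
    print((x1, x2), "score:", cross_score(x1, x2, w3))

# Quadrants I and III get positive scores; quadrants II and IV get negative
# scores, so the sign of the score separates the two groups of trees.
# Entering a negative weight negates every score, which swaps the two
# background colors.
```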
Exercise 2: A more sophisticated feature cross
For the second exercise, look at the arrangement of orange dots (sick trees) and blue dots (healthy trees) in the output model, and notice the following:
- The dots form roughly spherical patterns.
- The arrangement of dots is noisy; for example, notice the occasional blue dots in the outer ring of orange dots. Consequently, even a great model is unlikely to predict every dot correctly.
Task 1: Explore the Playground UI by doing the following:
1. Click the Run/Pause button, which is a white triangle inside a black circle. Playground begins training the model; watch the Epochs counter increase.
2. After the model has trained for at least 300 epochs, press the same Run/Pause button to pause training.
3. Look at the model. Is it making good predictions? In other words, are the blue dots generally surrounded by a blue background, and the orange dots generally surrounded by an orange background?
4. Examine the Test loss value below OUTPUT. Is it closer to 1.0 (higher loss) or closer to 0 (lower loss)? (A purely illustrative sketch of such a loss follows this list.)
5. Reset Playground by pressing the curved arrow to the left of the Run/Pause button.
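The page frames Test loss as a single number that is better the closer it is to 0. The exact formula Playground uses isn't given here, so the following is only an illustrative sketch of one common choice, mean squared error against labels encoded as +1 and -1 (the encoding is an assumption made for this example):

```python
# Illustrative only: Playground's exact loss formula isn't specified on this page.
# This just shows why values near 0 mean good predictions and larger values mean worse ones.
def mean_squared_loss(labels, outputs):
    return sum((y - p) ** 2 for y, p in zip(labels, outputs)) / len(labels)

labels = [ 1, -1,  1, -1]              # +1 / -1 encoding assumed for illustration
good   = [ 0.9, -0.8,  0.7, -0.9]      # outputs close to the labels
bad    = [ 0.1,  0.2, -0.3,  0.4]      # outputs that mostly miss

print(mean_squared_loss(labels, good))  # 0.0375 -- near 0, low loss
print(mean_squared_loss(labels, bad))   # 1.475  -- much higher loss
```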
Task 2: Build a better model by doing the following:
1. Select or deselect any combination of the five possible features.
2. Adjust the learning rate (a generic sketch of how it scales each update follows this list).
3. Train the model for at least 500 epochs.
4. Examine the value of Test loss. Can you get the test loss below 0.2?
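The learning rate controls how far the weights move on each training step. As a generic, hedged sketch (gradient descent in its simplest form, not Playground's internal code, with the gradient values invented for illustration):

```python
# Generic gradient-descent step, for illustration only.
# The learning rate scales how far each weight moves in a single update:
# too large a rate can overshoot, too small a rate may barely move in 500 epochs.
def gradient_step(weights, gradients, learning_rate):
    return [w - learning_rate * g for w, g in zip(weights, gradients)]

weights   = [0.0, 0.0]
gradients = [0.8, -0.5]   # made-up gradient values

print(gradient_step(weights, gradients, 0.001))  # tiny step: [-0.0008, 0.0005]
print(gradient_step(weights, gradients, 0.1))    # much larger step: [-0.08, 0.05]
```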
Solutions appear below Playground.
Click here for the solution to Task 1
The model is terrible. Notice, for example, that many of the orange dots are swimming in a sea of blue. Furthermore, the test loss is very high.
Click here for the solution to Task 2
You can improve model performance by doing the following:
- Select both polynomial transforms (x1² and x2²) and deselect the other three possible features.
- Reduce the learning rate to 0.001 or lower.
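Why do the two polynomial transforms work here? With x1² and x2² as inputs, the model's score is a weighted sum of squared coordinates, which acts as a threshold on the distance from the center and therefore matches the roughly circular arrangement of the dots. A minimal sketch, with the weights, bias, and sample points invented for illustration:

```python
# Illustrative sketch: a weighted sum of x1^2 and x2^2 is effectively a
# threshold on squared distance from the center. All numbers here are invented.
def radial_score(x1, x2, w1=1.0, w2=1.0, bias=-1.0):
    return w1 * x1**2 + w2 * x2**2 + bias   # negative inside radius 1, positive outside

for x1, x2 in [(0.2, 0.3), (1.5, 0.1), (0.0, 2.0)]:
    print((x1, x2), "score:", round(radial_score(x1, x2), 2))

# (0.2, 0.3) -> -0.87 (inside); (1.5, 0.1) -> 1.26 and (0.0, 2.0) -> 3.0 (outside).
# The sign of the score separates an inner cluster from an outer ring,
# which is the shape of the data in this exercise.
```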