[null,null,["อัปเดตล่าสุด 2024-11-14 UTC"],[[["\u003cp\u003eA machine learning model's performance is heavily reliant on the quality and quantity of the dataset it's trained on, with larger, high-quality datasets generally leading to better results.\u003c/p\u003e\n"],["\u003cp\u003eDatasets can contain various data types, including numerical, categorical, text, multimedia, and embedding vectors, each requiring specific handling for optimal model training.\u003c/p\u003e\n"],["\u003cp\u003eMaintaining data quality involves addressing issues like label errors, noisy features, and proper filtering to ensure the reliability of the dataset for accurate predictions.\u003c/p\u003e\n"],["\u003cp\u003eIncomplete examples with missing feature values should be handled by either deletion or imputation to avoid negatively impacting model training.\u003c/p\u003e\n"],["\u003cp\u003eWhen imputing missing values, use reliable methods like mean/median imputation and consider adding an indicator column to signal imputed values to the model.\u003c/p\u003e\n"]]],[],null,["# Datasets: Data characteristics\n\nA [**dataset**](/machine-learning/glossary#dataset) is a collection of\n[**examples**](/machine-learning/glossary#example).\n\nMany datasets store data in tables (grids), for example, as\ncomma-separated values (CSV) or directly from spreadsheets or\ndatabase tables. Tables are an intuitive input format for machine\nlearning [**models**](/machine-learning/glossary#model).\nYou can imagine each row of the table as an example\nand each column as a potential feature or label.\nThat said, datasets may also be derived from other formats, including\nlog files and protocol buffers.\n\nRegardless of the format, your ML model is only as good as the\ndata it trains on. This section examines key data characteristics.\n\nTypes of data\n-------------\n\nA dataset could contain many kinds of datatypes, including but certainly\nnot limited to:\n\n- numerical data, which is covered in a [separate\n unit](/machine-learning/crash-course/numerical-data)\n- categorical data, which is covered in a [separate\n unit](/machine-learning/crash-course/categorical-data)\n- human language, including individual words and sentences, all the way up to entire text documents\n- multimedia (such as images, videos, and audio files)\n- outputs from other ML systems\n- [**embedding vectors**](/machine-learning/glossary#embedding-vector), which are covered in a later unit\n\nQuantity of data\n----------------\n\nAs a rough rule of thumb, your model should train on at least an order\nof magnitude (or two) more examples than trainable parameters. However, good\nmodels generally train on *substantially* more examples than that.\n\nModels trained on large datasets with few\n[**features**](/machine-learning/glossary#feature)\ngenerally outperform models trained on small datasets with\na lot of features.\nGoogle has historically had great success training simple models on\nlarge datasets.\n\nDifferent datasets for different machine learning programs may require wildly\ndifferent amounts of examples to build a useful model. For some relatively\nsimple problems, a few dozen examples might be sufficient. 
Regardless of the format, your ML model is only as good as the
data it trains on. This section examines key data characteristics.

Types of data
-------------

A dataset could contain many kinds of data types, including but certainly
not limited to:

- numerical data, which is covered in a
  [separate unit](/machine-learning/crash-course/numerical-data)
- categorical data, which is covered in a
  [separate unit](/machine-learning/crash-course/categorical-data)
- human language, including individual words and sentences, all the way
  up to entire text documents
- multimedia (such as images, videos, and audio files)
- outputs from other ML systems
- [**embedding vectors**](/machine-learning/glossary#embedding-vector),
  which are covered in a later unit

Quantity of data
----------------

As a rough rule of thumb, your model should train on at least an order
of magnitude (or two) more examples than trainable parameters. However,
good models generally train on *substantially* more examples than that.

Models trained on large datasets with few
[**features**](/machine-learning/glossary#feature)
generally outperform models trained on small datasets with
a lot of features.
Google has historically had great success training simple models on
large datasets.

Different datasets for different machine learning programs may require
wildly different numbers of examples to build a useful model. For some
relatively simple problems, a few dozen examples might be sufficient.
For other problems, a trillion examples might be insufficient.

It's possible to get good results from a small dataset if you are adapting
an existing model already trained on large quantities of data from the
same schema.

Quality and reliability of data
-------------------------------

Everyone prefers high quality to low quality, but quality is such a vague
concept that it could be defined in many different ways. This course
defines **quality** pragmatically:

> A high-quality dataset helps your model accomplish its goal.
> A low-quality dataset inhibits your model from accomplishing its goal.

A high-quality dataset is usually also reliable.
**Reliability** refers to the degree to which you can *trust* your data.
A model trained on a reliable dataset is more likely to yield useful
predictions than a model trained on unreliable data.

In *measuring* reliability, you must determine:

- How common are label errors? For example, if your data is labeled by
  humans, how often did your human raters make mistakes?
- Are your features *noisy*? That is, do the values in your features
  contain errors? Be realistic: you can't purge your dataset of all noise.
  Some noise is normal; for example, GPS measurements of any location
  always fluctuate a little, week to week.
- Is the data properly filtered for your problem? For example, should
  your dataset include search queries from bots? If you're building a
  spam-detection system, then likely the answer is yes. However, if
  you're trying to improve search results for humans, then no.

The following are common causes of unreliable data in datasets:

- Omitted values. For example, a person forgot to enter a value for a
  house's age.
- Duplicate examples. For example, a server mistakenly uploaded the same
  log entries twice.
- Bad feature values. For example, someone typed an extra digit, or a
  thermometer was left out in the sun.
- Bad labels. For example, a person mistakenly labeled a picture of an
  oak tree as a maple tree.
- Bad sections of data. For example, a certain feature is very reliable,
  except for that one day when the network kept crashing.

We recommend using automation to flag unreliable data. For example,
unit tests that define or rely on an external formal data schema can
flag values that fall outside of a defined range, as in the sketch below.
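Here's a minimal sketch of such a range check, assuming a hand-written
schema of per-column bounds. (The `SCHEMA` dictionary, the column names,
and the bounds are hypothetical, for illustration only.)

```python
import pandas as pd

# Hypothetical schema: the allowed (low, high) range for each column.
SCHEMA = {
    "temperature": (-60.0, 60.0),  # degrees Celsius
    "humidity": (0.0, 100.0),      # percent
}

def flag_out_of_range(df: pd.DataFrame) -> pd.DataFrame:
    """Returns the rows that contain at least one out-of-range value."""
    mask = pd.Series(False, index=df.index)
    for column, (low, high) in SCHEMA.items():
        mask |= (df[column] < low) | (df[column] > high)
    return df[mask]

# The third row's temperature (999.0) falls outside the schema's range.
readings = pd.DataFrame({"temperature": [12.0, 18.0, 999.0],
                         "humidity": [40.0, 55.0, 38.0]})
print(flag_out_of_range(readings))
```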
| **Note:** Any sufficiently large or diverse dataset almost certainly
| contains [**outliers**](/machine-learning/glossary#outliers) that fall
| outside your data schema or unit test bands. Determining how to handle
| outliers is an important part of machine learning. The [**Numerical data
| unit**](/machine-learning/crash-course/numerical-data) details how to
| handle numeric outliers.

Complete vs. incomplete examples
--------------------------------

In a perfect world, each example is **complete**; that is, each example
contains a value for each feature.

**Figure 1.** A complete example.

Unfortunately, real-world examples are often **incomplete**, meaning that
at least one feature value is missing.

**Figure 2.** An incomplete example.

Don't train a model on incomplete examples. Instead, fix or eliminate
incomplete examples by doing one of the following:

- Delete incomplete examples.
- [**Impute**](/machine-learning/glossary#value-imputation) missing
  values; that is, convert the incomplete example to a complete example
  by providing well-reasoned guesses for the missing values.

**Figure 3.** Deleting incomplete examples from the dataset.

**Figure 4.** Imputing missing values for incomplete examples.

If the dataset contains enough complete examples to train a useful model,
then consider deleting the incomplete examples.
Similarly, if only one feature is missing a significant amount of data and
that one feature probably can't help the model much, then consider
deleting that feature from the model inputs and seeing how much quality is
lost by its removal. If the model works just as well, or almost as well,
without it, that's great.
Conversely, if you don't have enough complete examples to train a useful
model, then you might consider imputing missing values.

It's fine to delete useless or redundant examples, but it's bad to delete
important examples. Unfortunately, it can be difficult to differentiate
between useless and useful examples. If you can't decide whether
to delete or impute, consider building two datasets: one formed by
deleting incomplete examples and the other by imputing.
Then, determine which dataset trains the better model.

#### More about imputation handling

Clever algorithms can impute some pretty good missing values;
however, imputed values are rarely as good as the actual values.
Therefore, a good dataset tells the model which values are imputed and
which are actual. One way to do this is to add an extra Boolean column
to the dataset that indicates whether a particular feature's value
is imputed. For example, given a feature named `temperature`,
you could add an extra Boolean feature named something like
`temperature_is_imputed`. Then, during training, the model will
probably gradually learn to trust examples containing imputed values for
feature `temperature` *less* than examples containing
actual (non-imputed) values.

| Imputation is the process of generating well-reasoned data, not random
| or deceptive data. Be careful: good imputation can improve your model;
| bad imputation can hurt your model.

One common algorithm is to use the mean or median as the imputed value.
Consequently, when you represent a numerical feature with
[**Z-scores**](/machine-learning/glossary#z-score-normalization),
the imputed value is typically 0 (because 0 is generally the mean
Z-score).
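Here's a minimal pandas sketch of mean imputation with the indicator
column described above. The `temperature` and `temperature_is_imputed`
names follow the text; the data values are made up for illustration.

```python
import pandas as pd

df = pd.DataFrame({"temperature": [12.0, 18.0, None, 24.0, 38.0]})

# Record which values are imputed, so the model can learn to trust
# them less than actual (non-imputed) measurements.
df["temperature_is_imputed"] = df["temperature"].isna()

# Impute each missing value with the column mean:
# (12 + 18 + 24 + 38) / 4 = 23.
df["temperature"] = df["temperature"].fillna(df["temperature"].mean())

print(df)
```

Note that the imputed value here, 23, is the mean of the observed values;
if the feature were represented with Z-scores, the same operation would
fill in approximately 0.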
### Exercise: Check your understanding

| A sorted dataset, like the one in the following exercise, can sometimes
| simplify imputation. However, it is a bad idea to train on a sorted
| dataset. So, after imputation, randomize the order of examples in the
| training set.

Here are two columns of a dataset sorted by `Timestamp`.

| Timestamp          | Temperature |
|--------------------|-------------|
| June 8, 2023 09:00 | 12          |
| June 8, 2023 10:00 | 18          |
| June 8, 2023 11:00 | missing     |
| June 8, 2023 12:00 | 24          |
| June 8, 2023 13:00 | 38          |

Which of the following would be a reasonable value to impute for the
missing value of Temperature?

**23**
Probably. 23 is the mean of the adjacent values (12, 18, 24, and 38).
However, we aren't seeing the rest of the dataset, so it is possible that
23 would be an outlier for 11:00 on other days.

**31**
Unlikely. The limited part of the dataset that we can see suggests that
31 is much too high for the 11:00 Temperature. However, we can't be sure
without basing the imputation on a larger number of examples.

**51**
Very unlikely. 51 is much higher than any of the displayed values (and,
therefore, much higher than the mean).

| **Key terms:**
|
| - [Dataset](/machine-learning/glossary#dataset)
| - [Embedding vector](/machine-learning/glossary#embedding-vector)
| - [Example](/machine-learning/glossary#example)
| - [Feature](/machine-learning/glossary#feature)
| - [Model](/machine-learning/glossary#model)
| - [Value imputation](/machine-learning/glossary#value-imputation)
| - [Z-score normalization](/machine-learning/glossary#z-score-normalization)