Last updated (UTC): 2025-08-29

By default, ML Kit's APIs make use of Google-trained machine learning models.
These models are designed to cover a wide range of applications. However, some
use cases require more targeted models. That is why some ML Kit APIs now allow
you to replace the default models with custom TensorFlow Lite models.

Both the [Image Labeling](/ml-kit/vision/image-labeling) and the
[Object Detection & Tracking](/ml-kit/vision/object-detection) APIs
offer support for custom image classification models. They are compatible with a
selection of high-quality pre-trained models on TensorFlow Hub as well as your own
custom models trained with TensorFlow, AutoML Vision Edge or TensorFlow Lite
Model Maker.

If you need a custom solution for other domains or use cases, visit the
[On-device Machine Learning page](/learn/topics/on-device-ml) for guidance on
all of Google's solutions and tools for on-device machine learning.

## Benefits of using ML Kit with custom models

The benefits of using a custom image classification model with ML Kit are:

- **Easy-to-use high-level APIs** - No need to deal with low-level model input/output, handle image pre- and post-processing, or build a processing pipeline (see the sketch after this list).
- **No need to worry about label mapping yourself** - ML Kit extracts the labels from the TFLite model metadata and does the mapping for you.
- **Supports custom models from a wide range of sources** - from pre-trained models published on TensorFlow Hub to new models trained with TensorFlow, AutoML Vision Edge or TensorFlow Lite Model Maker.
- **Supports models hosted with Firebase** - Reduce APK size by downloading models on demand. Push model updates without republishing your app and perform easy A/B testing with Firebase Remote Config.
- Optimized for integration with **Android's Camera APIs**.

And, specifically for [Object Detection and Tracking](/ml-kit/vision/object-detection):

- **Improve classification accuracy** by locating the objects first and only running the classifier on the related image area.
- **Provide a real-time interactive experience** by giving your users immediate feedback on objects as they are detected and classified.
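To make the "easy-to-use high-level APIs" and label-mapping points concrete, here is a minimal sketch of labeling an image with a bundled custom model on Android in Kotlin. It assumes the custom Image Labeling artifact (`com.google.mlkit:image-labeling-custom`) and an asset named `model.tflite`; the asset name and threshold values are placeholders, and the Image Labeling guide linked above remains the authoritative reference.

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

// Sketch: label a Bitmap with a custom classifier bundled in the app's assets.
// "model.tflite" is a placeholder asset name.
fun labelWithCustomModel(bitmap: Bitmap) {
    val localModel = LocalModel.Builder()
        .setAssetFilePath("model.tflite")
        .build()

    val options = CustomImageLabelerOptions.Builder(localModel)
        .setConfidenceThreshold(0.5f) // complements any default threshold in the model metadata
        .setMaxResultCount(5)
        .build()

    val labeler = ImageLabeling.getClient(options)
    val image = InputImage.fromBitmap(bitmap, 0) // rotation in degrees

    labeler.process(image)
        .addOnSuccessListener { labels ->
            for (label in labels) {
                // label.text is resolved from the label map in the model's metadata;
                // label.index is the raw output class index.
                Log.d("CustomLabeling", "${label.text} (${label.index}): ${label.confidence}")
            }
        }
        .addOnFailureListener { e -> Log.e("CustomLabeling", "Labeling failed", e) }
}
```

Note that there is no image resizing, normalization, or output decoding in this snippet: ML Kit handles pre- and post-processing based on the model and its metadata.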
## Use a pre-trained image classification model

You can use pre-trained TensorFlow Lite models, provided they meet a
[set of criteria](#model-compatibility). Through TensorFlow Hub, we offer
a set of vetted models - from Google or other model creators - that meet these
criteria.

### Use a model published on TensorFlow Hub

[TensorFlow Hub](https://tfhub.dev) offers a wide range of pre-trained image
classification models - from various model creators - that can be used with the
Image Labeling and Object Detection and Tracking APIs. Follow these steps:

1. Pick a model from the [collection of ML Kit compatible models](https://tfhub.dev/ml-kit/collections/image-classification/1).
2. Download the .tflite model file from the model details page. Where available, pick a model format with metadata.
3. Follow our guides for the [Image Labeling API](/ml-kit/vision/image-labeling#custom-tflite) or the [Object Detection and Tracking API](/ml-kit/vision/object-detection#custom-tflite) on how to bundle the model file with your project and use it in your Android or iOS application.

## Train your own image classification model

If no pre-trained image classification model fits your needs, there are various
ways to train your own TensorFlow Lite model, some of which are outlined and
discussed in more detail below.

| Options to train your own image classification model | |
|---|---|
| **AutoML Vision Edge** | Offered through Google Cloud AI. Create state-of-the-art image classification models and easily evaluate the trade-off between performance and size. |
| **TensorFlow Lite Model Maker** | Re-train a model (transfer learning), which takes less time and requires less data than training a model from scratch. |
| **Convert a TensorFlow model to TensorFlow Lite** | Train a model with TensorFlow and then convert it to TensorFlow Lite. |

### AutoML Vision Edge

Image classification models trained using [AutoML Vision Edge](https://cloud.google.com/vision/automl/docs/edge-quickstart)
are supported as custom models by the
[Image Labeling](/ml-kit/vision/image-labeling) and
[Object Detection and Tracking](/ml-kit/vision/object-detection)
APIs. These APIs also support downloading models that are hosted with
[Firebase model deployment](https://firebase.google.com/docs/ml/use-custom-models)
(see the download sketch after the note below).

To learn more about how to use a model trained with AutoML Vision Edge in your
Android and iOS apps, follow the custom model guides for each API, depending
on your use case.

| **Note:** ML Kit only supports custom image classification models. Although AutoML Vision allows training of object detection models, these cannot be used with ML Kit.
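Since the APIs can also fetch models deployed with Firebase, here is a minimal sketch of downloading such a model on demand and building a labeler on it, in Kotlin. It assumes the `com.google.mlkit:linkfirebase` artifact and a model published under the placeholder name `your_model_name` in the Firebase console; the Firebase model deployment documentation linked above covers publishing and versioning.

```kotlin
import android.util.Log
import com.google.mlkit.common.model.CustomRemoteModel
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.common.model.RemoteModelManager
import com.google.mlkit.linkfirebase.FirebaseModelSource
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

// Sketch: download a Firebase-hosted model on demand, then build a labeler on it.
// "your_model_name" is a placeholder for the name the model was published under
// in the Firebase console.
fun prepareRemoteLabeler() {
    val remoteModel = CustomRemoteModel.Builder(
        FirebaseModelSource.Builder("your_model_name").build()
    ).build()

    val conditions = DownloadConditions.Builder()
        .requireWifi()
        .build()

    RemoteModelManager.getInstance()
        .download(remoteModel, conditions)
        .addOnSuccessListener {
            // The model is now cached on the device; build the labeler against it.
            val options = CustomImageLabelerOptions.Builder(remoteModel)
                .setConfidenceThreshold(0.5f)
                .build()
            val labeler = ImageLabeling.getClient(options)
            // labeler.process(...) as in the bundled-model sketch above.
        }
        .addOnFailureListener { e -> Log.e("CustomLabeling", "Model download failed", e) }
}
```

Because the model is fetched at runtime, it does not ship inside the APK, and later versions published to Firebase can be picked up without releasing a new app build.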
### TensorFlow Lite Model Maker

The TFLite Model Maker library simplifies the process of adapting and converting
a TensorFlow neural-network model to particular input data when deploying the
model for on-device ML applications. You can follow the
[Colab for image classification with TensorFlow Lite Model Maker](https://ai.google.dev/edge/litert/libraries/modify/image_classification).

To learn more about how to use a model trained with Model Maker in your Android
and iOS apps, follow our guides for the [Image Labeling API](/ml-kit/vision/image-labeling)
or the [Object Detection and Tracking API](/ml-kit/vision/object-detection),
depending on your use case.

### Models created using the TensorFlow Lite converter

If you have an existing TensorFlow image classification model, you can convert
it using the [TensorFlow Lite converter](https://www.tensorflow.org/lite/convert).
Please ensure the converted model meets the compatibility requirements below.

To learn more about how to use a TensorFlow Lite model in your Android and iOS
apps, follow our guides for the [Image Labeling API](/ml-kit/vision/image-labeling)
or the [Object Detection and Tracking API](/ml-kit/vision/object-detection),
depending on your use case.

## TensorFlow Lite model compatibility

You can use any pre-trained TensorFlow Lite image classification model,
provided it meets these requirements:

### Tensors

- The model must have only one input tensor with the following constraints:
  - The data is in RGB pixel format.
  - The data is of type UINT8 or FLOAT32. If the input tensor type is FLOAT32, it must specify the NormalizationOptions by attaching [metadata](#metadata).
  - The tensor has 4 dimensions: BxHxWxC, where:
    - B is the batch size. It must be 1 (inference on larger batches is not supported).
    - W and H are the input width and height.
    - C is the number of expected channels. It must be 3.
- The model must have at least one output tensor with N classes and either 2 or 4 dimensions:
  - (1xN)
  - (1x1x1xN)
- Currently only single-head models are fully supported. Multi-head models may output unexpected results.

You can sanity-check these constraints on a candidate model yourself, as shown in the sketch below.
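The following Kotlin sketch inspects a candidate .tflite file against the tensor constraints above. It is a minimal illustration assuming the TensorFlow Lite interpreter for Java/Kotlin (`org.tensorflow:tensorflow-lite`); the model file is a placeholder, and this check mirrors, but does not replace, ML Kit's own validation.

```kotlin
import java.io.File
import org.tensorflow.lite.DataType
import org.tensorflow.lite.Interpreter

// Sketch: print and assert the input/output tensor properties of a candidate model.
fun checkModelTensors(modelFile: File) {
    val interpreter = Interpreter(modelFile)
    try {
        val input = interpreter.getInputTensor(0)
        val inputShape = input.shape()       // expected: [1, height, width, 3]
        val inputType = input.dataType()     // expected: UINT8 or FLOAT32

        println("Input tensors:  ${interpreter.inputTensorCount}") // must be exactly 1
        println("Input shape:    ${inputShape.contentToString()}")
        println("Input type:     $inputType")

        val output = interpreter.getOutputTensor(0)
        println("Output tensors: ${interpreter.outputTensorCount}")
        println("Output shape:   ${output.shape().contentToString()}") // (1xN) or (1x1x1xN)

        require(interpreter.inputTensorCount == 1) { "Model must have a single input tensor" }
        require(inputShape.size == 4 && inputShape[0] == 1 && inputShape[3] == 3) {
            "Input must be 1 x H x W x 3"
        }
        require(inputType == DataType.UINT8 || inputType == DataType.FLOAT32) {
            "Input type must be UINT8 or FLOAT32"
        }
    } finally {
        interpreter.close()
    }
}
```

If the input type turns out to be FLOAT32, remember that the model additionally needs NormalizationOptions in its metadata, as described in the next section.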
### Metadata

You can add metadata to the TensorFlow Lite file as explained in
[Adding metadata to TensorFlow Lite model](https://www.tensorflow.org/lite/convert/metadata).

To use a model with a FLOAT32 input tensor, you must specify the
[NormalizationOptions](https://github.com/tensorflow/tensorflow/blob/d64c20eb62c5cfb9ff5936204afb8fb7c83cfc84/tensorflow/lite/experimental/support/metadata/metadata_schema.fbs#L295-L318)
in the metadata.

We also recommend that you attach this metadata to the output tensor's
[TensorMetadata](https://github.com/tensorflow/tensorflow/blob/d64c20eb62c5cfb9ff5936204afb8fb7c83cfc84/tensorflow/lite/experimental/support/metadata/metadata_schema.fbs#L396-L397):

- A label map specifying the name of each output class, as an [AssociatedFile](https://github.com/tensorflow/tensorflow/blob/d64c20eb62c5cfb9ff5936204afb8fb7c83cfc84/tensorflow/lite/experimental/support/metadata/metadata_schema.fbs#L434-L438) with type [TENSOR_AXIS_LABELS](https://github.com/tensorflow/tensorflow/blob/d64c20eb62c5cfb9ff5936204afb8fb7c83cfc84/tensorflow/lite/experimental/support/metadata/metadata_schema.fbs#L42-L54) (otherwise only the numerical output class indices can be returned).
- A default score threshold below which results are considered too low-confidence to be returned, as a [ProcessUnit](https://github.com/tensorflow/tensorflow/blob/d64c20eb62c5cfb9ff5936204afb8fb7c83cfc84/tensorflow/lite/experimental/support/metadata/metadata_schema.fbs#L422-L429) with [ScoreThresholdingOptions](https://github.com/tensorflow/tensorflow/blob/d64c20eb62c5cfb9ff5936204afb8fb7c83cfc84/tensorflow/lite/experimental/support/metadata/metadata_schema.fbs#L360-L366).
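To see how the label map in the metadata surfaces at the API level, here is a minimal Kotlin sketch of plugging a bundled custom classifier into the Object Detection & Tracking API. It assumes the `com.google.mlkit:object-detection-custom` artifact and the placeholder asset name `model.tflite`; the Object Detection and Tracking guide linked earlier remains the authoritative reference.

```kotlin
import android.util.Log
import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.custom.CustomObjectDetectorOptions

// Sketch: detect objects and classify each one with a bundled custom classifier.
// "model.tflite" is a placeholder asset name.
fun detectWithCustomClassifier(image: InputImage) {
    val localModel = LocalModel.Builder()
        .setAssetFilePath("model.tflite")
        .build()

    val options = CustomObjectDetectorOptions.Builder(localModel)
        .setDetectorMode(CustomObjectDetectorOptions.SINGLE_IMAGE_MODE)
        .enableMultipleObjects()
        .enableClassification()
        .setClassificationConfidenceThreshold(0.5f)
        .setMaxPerObjectLabelCount(3)
        .build()

    val detector = ObjectDetection.getClient(options)

    detector.process(image)
        .addOnSuccessListener { objects ->
            for (obj in objects) {
                for (label in obj.labels) {
                    // label.text is the class name from the TENSOR_AXIS_LABELS file in the
                    // model metadata; without that file only label.index is meaningful.
                    Log.d("CustomDetection", "${obj.boundingBox}: ${label.text} ${label.confidence}")
                }
            }
        }
        .addOnFailureListener { e -> Log.e("CustomDetection", "Detection failed", e) }
}
```

Because the detector crops each detected object before running the classifier, the custom model only sees the relevant image area, which is the classification-accuracy benefit described at the top of this page.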