Label images with a custom model on iOS

You can use ML Kit to recognize entities in an image and label them. This API supports a wide range of custom image classification models. Please refer to Custom models with ML Kit for guidance on model compatibility requirements, where to find pre-trained models, and how to train your own models.

There are two ways to integrate a custom model. You can bundle the model by putting it inside your app’s asset folder, or you can dynamically download it from Firebase. The following table compares the two options.

Bundled Model

  • The model is part of your app's bundle, which increases its size.
  • The model is available immediately, even when the device is offline.
  • No need for a Firebase project.
  • You must republish your app to update the model.
  • No built-in A/B testing.

Hosted Model

  • The model is not part of your app's bundle. It is hosted by uploading it to Firebase Machine Learning.
  • The model is downloaded on demand.
  • Requires a Firebase project.
  • Push model updates without republishing your app.
  • Easy A/B testing with Firebase Remote Config.

Before you begin

  1. Include the ML Kit libraries in your Podfile:

    For bundling a model with your app:

    pod 'GoogleMLKit/ImageLabelingCustom', '7.0.0'
    

    For dynamically downloading a model from Firebase, add the LinkFirebase dependency:

    pod 'GoogleMLKit/ImageLabelingCustom', '7.0.0'
    pod 'GoogleMLKit/LinkFirebase', '7.0.0'
    
  2. After you install or update your project's Pods, open your Xcode project using its .xcworkspace. ML Kit is supported in Xcode version 13.2.1 or higher.

  3. If you want to download a model, make sure you add Firebase to your iOS project, if you have not already done so. This is not required when you bundle the model.
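
    A minimal sketch of that setup, assuming you have already followed the Firebase iOS setup steps (added the Firebase SDK and your GoogleService-Info.plist to the project), is to configure Firebase at launch, before any ML Kit calls that use the hosted model:

    Swift

    import FirebaseCore
    import UIKit

    @main
    class AppDelegate: UIResponder, UIApplicationDelegate {
      func application(
        _ application: UIApplication,
        didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? = nil
      ) -> Bool {
        // Configure Firebase before downloading any hosted ML Kit models.
        FirebaseApp.configure()
        return true
      }
    }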

1. Load the model

Configure a local model source

To bundle the model with your app:

  1. Copy the model file (usually ending in .tflite or .lite) to your Xcode project, taking care to select Copy bundle resources when you do so. The model file will be included in the app bundle and available to ML Kit.

  2. Create a LocalModel object, specifying the path to the model file:

    Swift

    let localModel = LocalModel(path: localModelFilePath)

    Objective-C

    MLKLocalModel *localModel =
        [[MLKLocalModel alloc] initWithPath:localModelFilePath];

Configure a Firebase-hosted model source

To use the remotely-hosted model, create a CustomRemoteModel object, specifying the name you assigned the model when you published it:

Swift

let firebaseModelSource = FirebaseModelSource(
    name: "your_remote_model") // The name you assigned in
                               // the Firebase console.
let remoteModel = CustomRemoteModel(remoteModelSource: firebaseModelSource)

Objective-C

MLKFirebaseModelSource *firebaseModelSource =
    [[MLKFirebaseModelSource alloc]
        initWithName:@"your_remote_model"]; // The name you assigned in
                                            // the Firebase console.
MLKCustomRemoteModel *remoteModel =
    [[MLKCustomRemoteModel alloc]
        initWithRemoteModelSource:firebaseModelSource];

Then, start the model download task, specifying the conditions under which you want to allow downloading. If the model isn't on the device, or if a newer version of the model is available, the task will asynchronously download the model from Firebase:

Swift

let downloadConditions = ModelDownloadConditions(
  allowsCellularAccess: true,
  allowsBackgroundDownloading: true
)

let downloadProgress = ModelManager.modelManager().download(
  remoteModel,
  conditions: downloadConditions
)

Objective-C

MLKModelDownloadConditions *downloadConditions =
    [[MLKModelDownloadConditions alloc] initWithAllowsCellularAccess:YES
                                         allowsBackgroundDownloading:YES];

NSProgress *downloadProgress =
    [[MLKModelManager modelManager] downloadModel:remoteModel
                                       conditions:downloadConditions];

Many apps start the download task in their initialization code, but you can do so at any point before you need to use the model.
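
For example, here is a minimal sketch, assuming the remoteModel configured above and a hypothetical view controller, that starts the download when the screen loads so the model is likely on the device before it is needed:

Swift

import MLKit
import UIKit

class ImageLabelingViewController: UIViewController {
  // The same remote model configured in the previous step.
  let remoteModel = CustomRemoteModel(
      remoteModelSource: FirebaseModelSource(name: "your_remote_model"))
  var downloadProgress: Progress?

  override func viewDidLoad() {
    super.viewDidLoad()
    let conditions = ModelDownloadConditions(
      allowsCellularAccess: false,        // illustrative choice: Wi-Fi only
      allowsBackgroundDownloading: true
    )
    // If the latest model version is already on the device, ML Kit skips
    // the download and the returned progress completes immediately.
    downloadProgress = ModelManager.modelManager().download(
      remoteModel, conditions: conditions)
  }
}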

Configure the image labeler

After you configure your model sources, create an ImageLabeler object from one of them.

The following options are available:

Options
confidenceThreshold

Minimum confidence score of detected labels. If not set, any classifier threshold specified by the model’s metadata will be used. If the model does not contain any metadata or the metadata does not specify a classifier threshold, a default threshold of 0.0 will be used.

maxResultCount

Maximum number of labels to return. If not set, the default value of 10 will be used.

If you only have a locally-bundled model, just create a labeler from your LocalModel object:

Swift

let options = CustomImageLabelerOptions(localModel: localModel)
options.confidenceThreshold = NSNumber(value: 0.0)
let imageLabeler = ImageLabeler.imageLabeler(options: options)

Objective-C

MLKCustomImageLabelerOptions *options =
    [[MLKCustomImageLabelerOptions alloc] initWithLocalModel:localModel];
options.confidenceThreshold = @(0.0);
MLKImageLabeler *imageLabeler =
    [MLKImageLabeler imageLabelerWithOptions:options];
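
If you also want to cap the number of returned labels, set maxResultCount on the same options object. A short sketch with illustrative values, assuming the localModel from the previous step:

Swift

let options = CustomImageLabelerOptions(localModel: localModel)
options.confidenceThreshold = NSNumber(value: 0.5)  // keep labels scored at least 0.5
options.maxResultCount = 3                          // return at most the top three labels
let imageLabeler = ImageLabeler.imageLabeler(options: options)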

If you have a remotely-hosted model, you will have to check that it has been downloaded before you run it. You can check the status of the model download task using the model manager's isModelDownloaded(remoteModel:) method.

Although you only have to confirm this before running the labeler, if you have both a remotely-hosted model and a locally-bundled model, it might make sense to perform this check when instantiating the ImageLabeler: create a labeler from the remote model if it's been downloaded, and from the local model otherwise.

Swift

var options: CustomImageLabelerOptions!
if (ModelManager.modelManager().isModelDownloaded(remoteModel)) {
  options = CustomImageLabelerOptions(remoteModel: remoteModel)
} else {
  options = CustomImageLabelerOptions(localModel: localModel)
}
options.confidenceThreshold = NSNumber(value: 0.0)
let imageLabeler = ImageLabeler.imageLabeler(options: options)

Objective-C

MLKCustomImageLabelerOptions *options;
if ([[MLKModelManager modelManager] isModelDownloaded:remoteModel]) {
  options = [[MLKCustomImageLabelerOptions alloc] initWithRemoteModel:remoteModel];
} else {
  options = [[MLKCustomImageLabelerOptions alloc] initWithLocalModel:localModel];
}
options.confidenceThreshold = @(0.0);
MLKImageLabeler *imageLabeler =
    [MLKImageLabeler imageLabelerWithOptions:options];

If you only have a remotely-hosted model, you should disable model-related functionality (for example, gray out or hide part of your UI) until you confirm the model has been downloaded.

You can get the model download status by attaching observers to the default Notification Center. Be sure to use a weak reference to self in the observer block, since downloads can take some time, and the originating object can be freed by the time the download finishes. For example:

Swift

NotificationCenter.default.addObserver(
    forName: .mlkitModelDownloadDidSucceed,
    object: nil,
    queue: nil
) { [weak self] notification in
    guard let strongSelf = self,
        let userInfo = notification.userInfo,
        let model = userInfo[ModelDownloadUserInfoKey.remoteModel.rawValue]
            as? RemoteModel,
        model.name == "your_remote_model"
        else { return }
    // The model was downloaded and is available on the device
}

NotificationCenter.default.addObserver(
    forName: .mlkitModelDownloadDidFail,
    object: nil,
    queue: nil
) { [weak self] notification in
    guard let strongSelf = self,
        let userInfo = notification.userInfo,
        let model = userInfo[ModelDownloadUserInfoKey.remoteModel.rawValue]
            as? RemoteModel
        else { return }
    let error = userInfo[ModelDownloadUserInfoKey.error.rawValue]
    // ...
}

Objective-C

__weak typeof(self) weakSelf = self;

[NSNotificationCenter.defaultCenter
    addObserverForName:MLKModelDownloadDidSucceedNotification
                object:nil
                 queue:nil
            usingBlock:^(NSNotification *_Nonnull note) {
              if (weakSelf == nil || note.userInfo == nil) {
                return;
              }
              __strong typeof(self) strongSelf = weakSelf;

              MLKRemoteModel *model = note.userInfo[MLKModelDownloadUserInfoKeyRemoteModel];
              if ([model.name isEqualToString:@"your_remote_model"]) {
                // The model was downloaded and is available on the device
              }
            }];

[NSNotificationCenter.defaultCenter
    addObserverForName:MLKModelDownloadDidFailNotification
                object:nil
                 queue:nil
            usingBlock:^(NSNotification *_Nonnull note) {
              if (weakSelf == nil || note.userInfo == nil) {
                return;
              }
              __strong typeof(self) strongSelf = weakSelf;

              NSError *error = note.userInfo[MLKModelDownloadUserInfoKeyError];
            }];
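
Tying this together, here is a minimal sketch, assuming a hypothetical labelButton outlet and the remoteModel from earlier, that keeps the labeling UI disabled until the success notification arrives:

Swift

import MLKit
import UIKit

class LabelingViewController: UIViewController {
  // Hypothetical outlet and model; the names are illustrative only.
  @IBOutlet weak var labelButton: UIButton!
  let remoteModel = CustomRemoteModel(
      remoteModelSource: FirebaseModelSource(name: "your_remote_model"))

  override func viewDidLoad() {
    super.viewDidLoad()

    // Disable the labeling UI until the model is known to be on the device.
    labelButton.isEnabled =
        ModelManager.modelManager().isModelDownloaded(remoteModel)

    NotificationCenter.default.addObserver(
        forName: .mlkitModelDownloadDidSucceed, object: nil, queue: .main
    ) { [weak self] notification in
      // In a real app, also check the model name in notification.userInfo,
      // as shown in the observer examples above.
      self?.labelButton.isEnabled = true
    }
  }
}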

2. Prepare the input image

Create a VisionImage object using a UIImage or a CMSampleBuffer.

If you use a UIImage, follow these steps:

  • Create a VisionImage object with the UIImage. Make sure to specify the correct .orientation.

    Swift

    let visionImage = VisionImage(image: image)
    visionImage.orientation = image.imageOrientation

    Objective-C

    MLKVisionImage *visionImage = [[MLKVisionImage alloc] initWithImage:image];
    visionImage.orientation = image.imageOrientation;

If you use a CMSampleBuffer, follow these steps:

  • Specify the orientation of the image data contained in the CMSampleBuffer.

    To get the image orientation:

    Swift

    func imageOrientation(
      deviceOrientation: UIDeviceOrientation,
      cameraPosition: AVCaptureDevice.Position
    ) -> UIImage.Orientation {
      switch deviceOrientation {
      case .portrait:
        return cameraPosition == .front ? .leftMirrored : .right
      case .landscapeLeft:
        return cameraPosition == .front ? .downMirrored : .up
      case .portraitUpsideDown:
        return cameraPosition == .front ? .rightMirrored : .left
      case .landscapeRight:
        return cameraPosition == .front ? .upMirrored : .down
      case .faceDown, .faceUp, .unknown:
        return .up
      }
    }
          

    Objective-C

    - (UIImageOrientation)
      imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                             cameraPosition:(AVCaptureDevicePosition)cameraPosition {
      switch (deviceOrientation) {
        case UIDeviceOrientationPortrait:
          return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationLeftMirrored
                                                                : UIImageOrientationRight;
    
        case UIDeviceOrientationLandscapeLeft:
          return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationDownMirrored
                                                                : UIImageOrientationUp;
        case UIDeviceOrientationPortraitUpsideDown:
          return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationRightMirrored
                                                                : UIImageOrientationLeft;
        case UIDeviceOrientationLandscapeRight:
          return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationUpMirrored
                                                                : UIImageOrientationDown;
        case UIDeviceOrientationUnknown:
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
          return UIImageOrientationUp;
      }
    }
          
  • Create a VisionImage object using the CMSampleBuffer object and orientation:

    Swift

    let image = VisionImage(buffer: sampleBuffer)
    image.orientation = imageOrientation(
      deviceOrientation: UIDevice.current.orientation,
      cameraPosition: cameraPosition)

    Objective-C

     MLKVisionImage *image = [[MLKVisionImage alloc] initWithBuffer:sampleBuffer];
     image.orientation =
       [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                    cameraPosition:cameraPosition];

3. Run the image labeler

To label objects in an image, pass the image object to the ImageLabeler's process() method.

Asynchronously:

Swift

imageLabeler.process(image) { labels, error in
    guard error == nil, let labels = labels, !labels.isEmpty else {
        // Handle the error.
        return
    }
    // Show results.
}

Objective-C

[imageLabeler
    processImage:image
      completion:^(NSArray *_Nullable labels,
                   NSError *_Nullable error) {
        if (error != nil || labels.count == 0) {
            // Handle the error.
            return;
        }
        // Show results.
     }];

Synchronously:

Swift

var labels: [ImageLabel]
do {
    labels = try imageLabeler.results(in: image)
} catch let error {
    // Handle the error.
    return
}
// Show results.

Objective-C

NSError *error;
NSArray *labels =
    [imageLabeler resultsInImage:image error:&error];
// Show results or handle the error.

4. Get information about labeled entities

If the image labeling operation succeeds, it returns an array of ImageLabel. Each ImageLabel represents something that was labeled in the image. You can get each label's text description (if available in the metadata of the TensorFlow Lite model file), confidence score, and index. For example:

Swift

for label in labels {
  let labelText = label.text
  let confidence = label.confidence
  let index = label.index
}

Objective-C

for (MLKImageLabel *label in labels) {
  NSString *labelText = label.text;
  float confidence = label.confidence;
  NSInteger index = label.index;
}

Tips to improve real-time performance

If you want to label images in a real-time application, follow these guidelines to achieve the best frame rates: