Recognize text in images with ML Kit on iOS

You can use ML Kit to recognize text in images or video, such as the text of a street sign. The main characteristics of this feature are:

Text recognition v2 API

Description: Recognizes text in images or videos. Supports Latin, Chinese, Devanagari, Japanese, and Korean scripts, as well as a wide range of languages.
SDK names: GoogleMLKit/TextRecognition
           GoogleMLKit/TextRecognitionChinese
           GoogleMLKit/TextRecognitionDevanagari
           GoogleMLKit/TextRecognitionJapanese
           GoogleMLKit/TextRecognitionKorean
Implementation: Assets are statically linked to your app at build time.
App size impact: About 38 MB per script SDK.
Performance: Real time on most devices with the Latin script SDK; slower with the others.

Try it out

Before you begin

  1. Include the following ML Kit pods in your Podfile:
    # To recognize Latin script
    pod 'GoogleMLKit/TextRecognition', '3.2.0'
    # To recognize Chinese script
    pod 'GoogleMLKit/TextRecognitionChinese', '3.2.0'
    # To recognize Devanagari script
    pod 'GoogleMLKit/TextRecognitionDevanagari', '3.2.0'
    # To recognize Japanese script
    pod 'GoogleMLKit/TextRecognitionJapanese', '3.2.0'
    # To recognize Korean script
    pod 'GoogleMLKit/TextRecognitionKorean', '3.2.0'
    
  2. After installing or updating your project's pods, open your Xcode project using its .xcworkspace file. ML Kit is supported in Xcode version 12.4 or later.
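
Then, in each file where you use the API, import the modules that match the pods you installed. A minimal Swift sketch; the module names below follow the ML Kit quickstart samples, so confirm them against your installed pods and import only the script SDKs you actually added:

Swift

import MLKitVision               // provides VisionImage
import MLKitTextRecognition      // Latin script
// import MLKitTextRecognitionChinese
// import MLKitTextRecognitionDevanagari
// import MLKitTextRecognitionJapanese
// import MLKitTextRecognitionKorean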

1. Create an instance of TextRecognizer

Create an instance of TextRecognizer by calling +textRecognizer(options:), passing the options associated with the SDK you declared as a dependency above:

Swift

// When using Latin script recognition SDK
let latinOptions = TextRecognizerOptions()
let latinTextRecognizer = TextRecognizer.textRecognizer(options: latinOptions)

// When using Chinese script recognition SDK
let chineseOptions = ChineseTextRecognizerOptions()
let chineseTextRecognizer = TextRecognizer.textRecognizer(options: chineseOptions)

// When using Devanagari script recognition SDK
let devanagariOptions = DevanagariTextRecognizerOptions()
let devanagariTextRecognizer = TextRecognizer.textRecognizer(options: devanagariOptions)

// When using Japanese script recognition SDK
let japaneseOptions = JapaneseTextRecognizerOptions()
let japaneseTextRecognizer = TextRecognizer.textRecognizer(options: japaneseOptions)

// When using Korean script recognition SDK
let koreanOptions = KoreanTextRecognizerOptions()
let koreanTextRecognizer = TextRecognizer.textRecognizer(options: koreanOptions)

Objective-C

// When using Latin script recognition SDK
MLKTextRecognizerOptions *latinOptions = [[MLKTextRecognizerOptions alloc] init];
MLKTextRecognizer *latinTextRecognizer = [MLKTextRecognizer textRecognizerWithOptions:latinOptions];

// When using Chinese script recognition SDK
MLKChineseTextRecognizerOptions *chineseOptions = [[MLKChineseTextRecognizerOptions alloc] init];
MLKTextRecognizer *chineseTextRecognizer = [MLKTextRecognizer textRecognizerWithOptions:chineseOptions];

// When using Devanagari script recognition SDK
MLKDevanagariTextRecognizerOptions *devanagariOptions = [[MLKDevanagariTextRecognizerOptions alloc] init];
MLKTextRecognizer *devanagariTextRecognizer = [MLKTextRecognizer textRecognizerWithOptions:devanagariOptions];

// When using Japanese script recognition SDK
MLKJapaneseTextRecognizerOptions *japaneseOptions = [[MLKJapaneseTextRecognizerOptions alloc] init];
MLKTextRecognizer *japaneseTextRecognizer = [MLKTextRecognizer textRecognizerWithOptions:japaneseOptions];

// When using Korean script recognition SDK
MLKKoreanTextRecognizerOptions *koreanOptions = [[MLKKoreanTextRecognizerOptions alloc] init];
MLKTextRecognizer *koreanTextRecognizer = [MLKTextRecognizer textRecognizerWithOptions:koreanOptions];

2. Prepare the input image

Pass the image as a UIImage or a CMSampleBufferRef to the TextRecognizer's process(_:completion:) method:

Create a VisionImage object using a UIImage or a CMSampleBuffer.

If you use a UIImage, follow these steps:

  • Create a VisionImage object with the UIImage. Make sure to specify the correct .orientation.

    Swift

    let visionImage = VisionImage(image: image)
    visionImage.orientation = image.imageOrientation

    Objective-C

    MLKVisionImage *visionImage = [[MLKVisionImage alloc] initWithImage:image];
    visionImage.orientation = image.imageOrientation;

If you use a CMSampleBuffer, follow these steps:

  • Specify the orientation of the image data contained in the CMSampleBuffer.

    To get the image orientation:

    Swift

    func imageOrientation(
      deviceOrientation: UIDeviceOrientation,
      cameraPosition: AVCaptureDevice.Position
    ) -> UIImage.Orientation {
      switch deviceOrientation {
      case .portrait:
        return cameraPosition == .front ? .leftMirrored : .right
      case .landscapeLeft:
        return cameraPosition == .front ? .downMirrored : .up
      case .portraitUpsideDown:
        return cameraPosition == .front ? .rightMirrored : .left
      case .landscapeRight:
        return cameraPosition == .front ? .upMirrored : .down
      case .faceDown, .faceUp, .unknown:
        return .up
      }
    }
          

    Objective-C

    - (UIImageOrientation)
      imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                             cameraPosition:(AVCaptureDevicePosition)cameraPosition {
      switch (deviceOrientation) {
        case UIDeviceOrientationPortrait:
          return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationLeftMirrored
                                                                : UIImageOrientationRight;
    
        case UIDeviceOrientationLandscapeLeft:
          return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationDownMirrored
                                                                : UIImageOrientationUp;
        case UIDeviceOrientationPortraitUpsideDown:
          return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationRightMirrored
                                                                : UIImageOrientationLeft;
        case UIDeviceOrientationLandscapeRight:
          return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationUpMirrored
                                                                : UIImageOrientationDown;
        case UIDeviceOrientationUnknown:
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
          return UIImageOrientationUp;
      }
    }
          
  • Create a VisionImage object using the CMSampleBuffer object and orientation:

    Swift

    let image = VisionImage(buffer: sampleBuffer)
    image.orientation = imageOrientation(
      deviceOrientation: UIDevice.current.orientation,
      cameraPosition: cameraPosition)

    Objective-C

     MLKVisionImage *image = [[MLKVisionImage alloc] initWithBuffer:sampleBuffer];
     image.orientation =
       [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                    cameraPosition:cameraPosition];

3. Process the image

Then, pass the image to the process(_:completion:) method:

Swift

textRecognizer.process(visionImage) { result, error in
  guard error == nil, let result = result else {
    // Error handling
    return
  }
  // Recognized text
}

Objective-C

[textRecognizer processImage:image
                  completion:^(MLKText *_Nullable result,
                               NSError *_Nullable error) {
  if (error != nil || result == nil) {
    // Error handling
    return;
  }
  // Recognized text
}];
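
If you are not certain which queue the completion handler runs on, dispatch UI updates to the main queue explicitly; the extra dispatch is harmless if you are already on it. A minimal sketch, assuming `resultLabel` is a hypothetical UILabel in your view controller:

Swift

textRecognizer.process(visionImage) { result, error in
  guard error == nil, let result = result else {
    // Surface the error, e.g. log error?.localizedDescription.
    return
  }
  DispatchQueue.main.async {
    // `resultLabel` is a hypothetical label; replace with your own UI.
    resultLabel.text = result.text
  }
}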

4. Extract text from blocks of recognized text

If the text recognition operation succeeds, it returns a Text object. A Text object contains the full text recognized in the image and zero or more TextBlock objects.

Each TextBlock represents a rectangular block of text, which contains zero or more TextLine objects. Each TextLine object contains zero or more TextElement objects, which represent words and word-like entities such as dates and numbers.

For each TextBlock, TextLine, and TextElement object, you can get the text recognized in the region and the bounding coordinates of the region.

For example:

Swift

let resultText = result.text
for block in result.blocks {
    let blockText = block.text
    let blockLanguages = block.recognizedLanguages
    let blockCornerPoints = block.cornerPoints
    let blockFrame = block.frame
    for line in block.lines {
        let lineText = line.text
        let lineLanguages = line.recognizedLanguages
        let lineCornerPoints = line.cornerPoints
        let lineFrame = line.frame
        for element in line.elements {
            let elementText = element.text
            let elementCornerPoints = element.cornerPoints
            let elementFrame = element.frame
        }
    }
}

Objective-C

NSString *resultText = result.text;
for (MLKTextBlock *block in result.blocks) {
  NSString *blockText = block.text;
  NSArray<MLKTextRecognizedLanguage *> *blockLanguages = block.recognizedLanguages;
  NSArray<NSValue *> *blockCornerPoints = block.cornerPoints;
  CGRect blockFrame = block.frame;
  for (MLKTextLine *line in block.lines) {
    NSString *lineText = line.text;
    NSArray<MLKTextRecognizedLanguage *> *lineLanguages = line.recognizedLanguages;
    NSArray<NSValue *> *lineCornerPoints = line.cornerPoints;
    CGRect lineFrame = line.frame;
    for (MLKTextElement *element in line.elements) {
      NSString *elementText = element.text;
      NSArray<NSValue *> *elementCornerPoints = element.cornerPoints;
      CGRect elementFrame = element.frame;
    }
  }
}
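
Because frame and cornerPoints are expressed in the input image's coordinate space, one simple way to inspect the result is to stroke each block's frame directly over the image. A minimal sketch; drawBlocks is a hypothetical helper, not part of the ML Kit API:

Swift

import UIKit

/// Returns a copy of `image` with each recognized block's frame outlined.
func drawBlocks(_ blocks: [TextBlock], on image: UIImage) -> UIImage {
  let format = UIGraphicsImageRendererFormat()
  format.scale = image.scale  // keep the source image's pixel density
  let renderer = UIGraphicsImageRenderer(size: image.size, format: format)
  return renderer.image { context in
    image.draw(at: .zero)
    context.cgContext.setStrokeColor(UIColor.red.cgColor)
    context.cgContext.setLineWidth(2)
    for block in blocks {
      // block.frame is a CGRect in image coordinates.
      context.cgContext.stroke(block.frame)
    }
  }
}

After a successful recognition, you might call this as drawBlocks(result.blocks, on: image) and show the returned image for debugging.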

Input image guidelines

  • For ML Kit to accurately recognize text, input images must contain text that is represented by sufficient pixel data. Ideally, each character should be at least 16x16 pixels. There is generally no accuracy benefit for characters to be larger than 24x24 pixels.

    So, for example, a 640x480 image might work well to scan a business card that occupies the full width of the image. To scan a document printed on letter-sized paper, a 720x1280 pixel image might be required.

  • Poor image focus can affect text recognition accuracy. If you aren't getting acceptable results, try asking the user to recapture the image.

  • If you are recognizing text in a real-time application, you should also consider the overall dimensions of the input images. Smaller images can be processed faster. To reduce latency, ensure that the text occupies as much of the image as possible, and capture images at lower resolutions, keeping in mind the accuracy requirements above (see the resizing sketch below). For more information, see Tips to improve performance.
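
For example, a minimal downscaling sketch; `maxDimension` is an illustrative parameter, and the default of 1280 matches the document-scanning guideline above:

Swift

import UIKit

/// Scales an image down so its longest side is at most `maxDimension`,
/// trading resolution for lower recognition latency.
func downscaled(_ image: UIImage, maxDimension: CGFloat = 1280) -> UIImage {
  let longestSide = max(image.size.width, image.size.height)
  guard longestSide > maxDimension else { return image }
  let scale = maxDimension / longestSide
  let newSize = CGSize(width: image.size.width * scale,
                       height: image.size.height * scale)
  let format = UIGraphicsImageRendererFormat()
  format.scale = 1  // render at exactly newSize pixels
  let renderer = UIGraphicsImageRenderer(size: newSize, format: format)
  return renderer.image { _ in
    image.draw(in: CGRect(origin: .zero, size: newSize))
  }
}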

Tips to improve performance

  • For processing video frames, use the results(in:) synchronous API of the detector. Call this method from the AVCaptureVideoDataOutputSampleBufferDelegate's captureOutput(_, didOutput:from:) function to synchronously get results from the given video frame. Keep AVCaptureVideoDataOutput's alwaysDiscardsLateVideoFrames set to true to throttle calls to the detector; if a new video frame becomes available while the detector is running, it will be dropped. (A delegate sketch follows this list.)
  • If you use the output of the detector to overlay graphics on the input image, first get the result from ML Kit, then render the image and the overlay in a single step. That way, you render to the display surface only once for each processed input frame. See updatePreviewOverlayViewWithLastFrame in the ML Kit quickstart sample for an example.
  • Consider capturing images at a lower resolution, but keep in mind this API's image dimension requirements.
  • To avoid potential performance degradation, don't run multiple TextRecognizer instances with different script options at the same time.
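
To make the first tip concrete, here is a minimal sketch of a capture delegate that calls the synchronous results(in:) API. It assumes an already-configured AVCaptureSession, the imageOrientation(deviceOrientation:cameraPosition:) helper from step 2, and the Latin script SDK; FrameProcessor is a hypothetical class name:

Swift

import AVFoundation
import UIKit
import MLKitVision
import MLKitTextRecognition

class FrameProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
  // Reuse one recognizer; don't create one per frame or mix script options.
  let textRecognizer = TextRecognizer.textRecognizer(options: TextRecognizerOptions())

  func captureOutput(_ output: AVCaptureOutput,
                     didOutput sampleBuffer: CMSampleBuffer,
                     from connection: AVCaptureConnection) {
    let image = VisionImage(buffer: sampleBuffer)
    image.orientation = imageOrientation(
      deviceOrientation: UIDevice.current.orientation,
      cameraPosition: .back)
    do {
      // Synchronous call; late frames are dropped as long as
      // alwaysDiscardsLateVideoFrames is true on the video data output.
      let result = try textRecognizer.results(in: image)
      // Use result.text / result.blocks, e.g. hand off to an overlay view.
      _ = result
    } catch {
      // Handle or log the recognition error.
    }
  }
}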