With ML Kit's digital ink recognition, you can recognize text handwritten on a digital surface in hundreds of languages, as well as classify sketches.
Try it out
- Experiment with the sample app to see an example usage of this API.
Before you begin
Include the following ML Kit library in your Podfile:
pod 'GoogleMLKit/DigitalInkRecognition', '3.2.0'
After you install or update your project's Pods, open your Xcode project using its .xcworkspace. ML Kit is supported in Xcode version 13.2.1 or greater.
You are now ready to start recognizing text in Ink objects.
Build an Ink object
The main way to build an Ink object is to draw it on a touch screen. On iOS, you can use a UIImageView along with touch event handlers that draw the strokes on the screen and also store the strokes' points to build the Ink object. This general pattern is demonstrated in the following code snippet. See the quickstart app for a more complete example that separates the touch event handling, screen drawing, and stroke data management.
Swift
@IBOutlet weak var mainImageView: UIImageView!
var kMillisecondsPerTimeInterval = 1000.0
var lastPoint = CGPoint.zero
private var strokes: [Stroke] = []
private var points: [StrokePoint] = []

func drawLine(from fromPoint: CGPoint, to toPoint: CGPoint) {
  UIGraphicsBeginImageContext(view.frame.size)
  guard let context = UIGraphicsGetCurrentContext() else {
    return
  }
  mainImageView.image?.draw(in: view.bounds)
  context.move(to: fromPoint)
  context.addLine(to: toPoint)
  context.setLineCap(.round)
  context.setBlendMode(.normal)
  context.setLineWidth(10.0)
  context.setStrokeColor(UIColor.white.cgColor)
  context.strokePath()
  mainImageView.image = UIGraphicsGetImageFromCurrentImageContext()
  mainImageView.alpha = 1.0
  UIGraphicsEndImageContext()
}

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
  guard let touch = touches.first else {
    return
  }
  lastPoint = touch.location(in: mainImageView)
  let t = touch.timestamp
  points = [StrokePoint.init(x: Float(lastPoint.x),
                             y: Float(lastPoint.y),
                             t: Int(t * kMillisecondsPerTimeInterval))]
  drawLine(from: lastPoint, to: lastPoint)
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
  guard let touch = touches.first else {
    return
  }
  let currentPoint = touch.location(in: mainImageView)
  let t = touch.timestamp
  points.append(StrokePoint.init(x: Float(currentPoint.x),
                                 y: Float(currentPoint.y),
                                 t: Int(t * kMillisecondsPerTimeInterval)))
  drawLine(from: lastPoint, to: currentPoint)
  lastPoint = currentPoint
}

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
  guard let touch = touches.first else {
    return
  }
  let currentPoint = touch.location(in: mainImageView)
  let t = touch.timestamp
  points.append(StrokePoint.init(x: Float(currentPoint.x),
                                 y: Float(currentPoint.y),
                                 t: Int(t * kMillisecondsPerTimeInterval)))
  drawLine(from: lastPoint, to: currentPoint)
  lastPoint = currentPoint
  strokes.append(Stroke.init(points: points))
  self.points = []
  doRecognition()
}
Objective-C
// Interface
@property (weak, nonatomic) IBOutlet UIImageView *mainImageView;
@property(nonatomic) CGPoint lastPoint;
@property(nonatomic) NSMutableArray<MLKStroke *> *strokes;
@property(nonatomic) NSMutableArray<MLKStrokePoint *> *points;

// Implementations
static const double kMillisecondsPerTimeInterval = 1000.0;

- (void)drawLineFrom:(CGPoint)fromPoint to:(CGPoint)toPoint {
  UIGraphicsBeginImageContext(self.mainImageView.frame.size);
  [self.mainImageView.image drawInRect:CGRectMake(0, 0, self.mainImageView.frame.size.width,
                                                  self.mainImageView.frame.size.height)];
  CGContextMoveToPoint(UIGraphicsGetCurrentContext(), fromPoint.x, fromPoint.y);
  CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), toPoint.x, toPoint.y);
  CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
  CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 10.0);
  CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 1, 1, 1, 1);
  CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeNormal);
  CGContextStrokePath(UIGraphicsGetCurrentContext());
  CGContextFlush(UIGraphicsGetCurrentContext());
  self.mainImageView.image = UIGraphicsGetImageFromCurrentImageContext();
  UIGraphicsEndImageContext();
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(nullable UIEvent *)event {
  UITouch *touch = [touches anyObject];
  self.lastPoint = [touch locationInView:self.mainImageView];
  NSTimeInterval time = [touch timestamp];
  self.points = [NSMutableArray array];
  [self.points addObject:[[MLKStrokePoint alloc] initWithX:self.lastPoint.x
                                                         y:self.lastPoint.y
                                                         t:time * kMillisecondsPerTimeInterval]];
  [self drawLineFrom:self.lastPoint to:self.lastPoint];
}

- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(nullable UIEvent *)event {
  UITouch *touch = [touches anyObject];
  CGPoint currentPoint = [touch locationInView:self.mainImageView];
  NSTimeInterval time = [touch timestamp];
  [self.points addObject:[[MLKStrokePoint alloc] initWithX:currentPoint.x
                                                         y:currentPoint.y
                                                         t:time * kMillisecondsPerTimeInterval]];
  [self drawLineFrom:self.lastPoint to:currentPoint];
  self.lastPoint = currentPoint;
}

- (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(nullable UIEvent *)event {
  UITouch *touch = [touches anyObject];
  CGPoint currentPoint = [touch locationInView:self.mainImageView];
  NSTimeInterval time = [touch timestamp];
  [self.points addObject:[[MLKStrokePoint alloc] initWithX:currentPoint.x
                                                         y:currentPoint.y
                                                         t:time * kMillisecondsPerTimeInterval]];
  [self drawLineFrom:self.lastPoint to:currentPoint];
  self.lastPoint = currentPoint;
  if (self.strokes == nil) {
    self.strokes = [NSMutableArray array];
  }
  [self.strokes addObject:[[MLKStroke alloc] initWithPoints:self.points]];
  self.points = nil;
  [self doRecognition];
}
Note that the snippet includes a sample function to draw strokes into the UIImageView, which should be adapted to your application as necessary. We recommend using round line caps when drawing the segments so that zero-length segments are drawn as a dot (think of the dot on a lowercase letter i). The doRecognition() function is called after each stroke is written and is defined below.
Get an instance of DigitalInkRecognizer
To perform the recognition, we need to pass the Ink object to a DigitalInkRecognizer instance. To obtain the DigitalInkRecognizer instance, we first need to download the recognizer model for the desired language and load the model into RAM. This can be accomplished using the following code snippet, which for simplicity is placed in the viewDidLoad() method and uses a hardcoded language name. See the quickstart app for an example of how to show the list of available languages to the user and download the selected language.
Swift
override func viewDidLoad() {
  super.viewDidLoad()
  let languageTag = "en-US"
  let identifier = DigitalInkRecognitionModelIdentifier(forLanguageTag: languageTag)
  if identifier == nil {
    // no model was found or the language tag couldn't be parsed, handle error.
  }
  let model = DigitalInkRecognitionModel.init(modelIdentifier: identifier!)
  let modelManager = ModelManager.modelManager()
  let conditions = ModelDownloadConditions.init(allowsCellularAccess: true,
                                                allowsBackgroundDownloading: true)
  modelManager.download(model, conditions: conditions)

  // Get a recognizer for the language
  let options: DigitalInkRecognizerOptions = DigitalInkRecognizerOptions.init(model: model)
  recognizer = DigitalInkRecognizer.digitalInkRecognizer(options: options)
}
Objective-C
- (void)viewDidLoad {
  [super viewDidLoad];
  NSString *languagetag = @"en-US";
  MLKDigitalInkRecognitionModelIdentifier *identifier =
      [MLKDigitalInkRecognitionModelIdentifier modelIdentifierForLanguageTag:languagetag];
  if (identifier == nil) {
    // no model was found or the language tag couldn't be parsed, handle error.
  }
  MLKDigitalInkRecognitionModel *model =
      [[MLKDigitalInkRecognitionModel alloc] initWithModelIdentifier:identifier];
  MLKModelManager *modelManager = [MLKModelManager modelManager];
  [modelManager downloadModel:model
                   conditions:[[MLKModelDownloadConditions alloc]
                                  initWithAllowsCellularAccess:YES
                                    allowsBackgroundDownloading:YES]];
  MLKDigitalInkRecognizerOptions *options =
      [[MLKDigitalInkRecognizerOptions alloc] initWithModel:model];
  self.recognizer = [MLKDigitalInkRecognizer digitalInkRecognizerWithOptions:options];
}
The quickstart app includes additional code that shows how to handle multiple downloads at the same time and how to determine which download succeeded by handling the completion notifications.
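As one way to handle those completion notifications, the sketch below observes the model manager's download notifications from NotificationCenter. The notification names and userInfo keys come from the ML Kit common module; the observer setup itself is an illustrative assumption, not code from the quickstart app.
Swift
import MLKit

// Sketch only: listen for model download results posted by the model manager.
// Assumes the digital ink model is reported like other ML Kit remote models.
func observeModelDownloads() {
  NotificationCenter.default.addObserver(
    forName: .mlkitModelDownloadDidSucceed, object: nil, queue: .main
  ) { notification in
    if let userInfo = notification.userInfo,
       let model = userInfo[ModelDownloadUserInfoKey.remoteModel.rawValue]
         as? DigitalInkRecognitionModel {
      // This download finished; it is now safe to create a recognizer for `model`.
      print("Model download succeeded: \(model)")
    }
  }
  NotificationCenter.default.addObserver(
    forName: .mlkitModelDownloadDidFail, object: nil, queue: .main
  ) { notification in
    if let userInfo = notification.userInfo,
       let error = userInfo[ModelDownloadUserInfoKey.error.rawValue] as? NSError {
      print("Model download failed: \(error.localizedDescription)")
    }
  }
}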
Recognize an Ink object
Next comes the doRecognition() function, which for simplicity is called from touchesEnded(). In other applications you might want to invoke recognition only after a timeout, or when the user presses a button to trigger it.
Swift
func doRecognition() {
  let ink = Ink.init(strokes: strokes)
  recognizer.recognize(
    ink: ink,
    completion: {
      [unowned self]
      (result: DigitalInkRecognitionResult?, error: Error?) in
      var alertTitle = ""
      var alertText = ""
      if let result = result, let candidate = result.candidates.first {
        alertTitle = "I recognized this:"
        alertText = candidate.text
      } else {
        alertTitle = "I hit an error:"
        alertText = error!.localizedDescription
      }
      let alert = UIAlertController(title: alertTitle,
                                    message: alertText,
                                    preferredStyle: UIAlertController.Style.alert)
      alert.addAction(UIAlertAction(title: "OK",
                                    style: UIAlertAction.Style.default,
                                    handler: nil))
      self.present(alert, animated: true, completion: nil)
    }
  )
}
Objective-C
- (void)doRecognition {
  MLKInk *ink = [[MLKInk alloc] initWithStrokes:self.strokes];
  __weak typeof(self) weakSelf = self;
  [self.recognizer
      recognizeInk:ink
        completion:^(MLKDigitalInkRecognitionResult *_Nullable result,
                     NSError *_Nullable error) {
    typeof(weakSelf) strongSelf = weakSelf;
    if (strongSelf == nil) {
      return;
    }
    NSString *alertTitle = nil;
    NSString *alertText = nil;
    if (result.candidates.count > 0) {
      alertTitle = @"I recognized this:";
      alertText = result.candidates[0].text;
    } else {
      alertTitle = @"I hit an error:";
      alertText = [error localizedDescription];
    }
    UIAlertController *alert =
        [UIAlertController alertControllerWithTitle:alertTitle
                                            message:alertText
                                     preferredStyle:UIAlertControllerStyleAlert];
    [alert addAction:[UIAlertAction actionWithTitle:@"OK"
                                              style:UIAlertActionStyleDefault
                                            handler:nil]];
    [strongSelf presentViewController:alert animated:YES completion:nil];
  }];
}
Managing model downloads
We have already seen how to download a recognition model. The following code snippets illustrate how to check whether a model has already been downloaded, and how to delete a model to recover storage space when it is no longer needed.
Check whether a model has been downloaded already
Swift
let model : DigitalInkRecognitionModel = ...
let modelManager = ModelManager.modelManager()
modelManager.isModelDownloaded(model)
Objective-C
MLKDigitalInkRecognitionModel *model = ...;
MLKModelManager *modelManager = [MLKModelManager modelManager];
[modelManager isModelDownloaded:model];
Delete a downloaded model
Swift
let model : DigitalInkRecognitionModel = ...
let modelManager = ModelManager.modelManager()

if modelManager.isModelDownloaded(model) {
  modelManager.deleteDownloadedModel(
    model,
    completion: {
      error in
      if error != nil {
        // Handle error
        return
      }
      NSLog("Model deleted.")
    })
}
Objective-C
MLKDigitalInkRecognitionModel *model = ...;
MLKModelManager *modelManager = [MLKModelManager modelManager];

if ([modelManager isModelDownloaded:model]) {
  [modelManager deleteDownloadedModel:model
                           completion:^(NSError *_Nullable error) {
                             if (error) {
                               // Handle error.
                               return;
                             }
                             NSLog(@"Model deleted.");
                           }];
}
Tips to improve text recognition accuracy
The accuracy of text recognition can vary across languages. Accuracy also depends on writing style. While digital ink recognition is trained to handle many kinds of writing styles, results can vary from user to user.
Here are some ways to improve the accuracy of a text recognizer. Note that these techniques don't apply to the drawing classifiers for emojis, autodraw, and shapes.
Writing area
Many applications have a well-defined writing area for user input. The meaning of a symbol is partially determined by its size relative to the size of the writing area that contains it. For example, the difference between a lower- or upper-case letter "o" or "c", and a comma versus a forward slash.
Telling the recognizer the width and height of the writing area can improve accuracy. However, the recognizer assumes that the writing area contains only a single line of text. If the physical writing area is large enough to let the user write two or more lines, you may get better results by passing in a WritingArea whose height is your best estimate of the height of a single line of text. The WritingArea object you pass to the recognizer does not have to correspond exactly to the physical writing area on the screen. Changing the WritingArea height in this way works better in some languages than in others.
When you specify the writing area, give its width and height in the same units as the stroke coordinates. The x,y coordinate arguments have no unit requirement: the API normalizes all units, so the only thing that matters is the relative size and position of the strokes. You are free to pass in coordinates at whatever scale makes sense for your system.
Pre-context
Pre-context is the text that immediately precedes the strokes in the Ink you are trying to recognize. You can help the recognizer by telling it about the pre-context.
For example, the cursive letters "n" and "u" are often mistaken for one another. If the user has already entered the partial word "arg", they might continue with strokes that can be recognized as "ument" or "nment". Specifying the pre-context "arg" resolves the ambiguity, since the word "argument" is more likely than "argnment".
Pre-context can also help the recognizer identify word breaks, the spaces between words. You can type a space character but you cannot draw one, so how can a recognizer determine when one word ends and the next one begins? If the user has already written "hello" and continues with the written word "world", without pre-context the recognizer returns the string "world". However, if you specify the pre-context "hello", the model returns the string " world" with a leading space, since "hello world" makes more sense than "helloword".
You should provide the longest possible pre-context string, up to 20 characters, including spaces. If the string is longer, the recognizer uses only the last 20 characters.
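As a small illustration of that limit, the sketch below trims the text your app already holds down to its last 20 characters before using it as pre-context; the recognizedTextSoFar name is hypothetical.
Swift
// Hypothetical helper: keep only the trailing 20 characters (including spaces)
// of the text already on screen, since the recognizer ignores anything older.
func preContext(from recognizedTextSoFar: String) -> String {
  return String(recognizedTextSoFar.suffix(20))
}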
The following code sample shows how to define a writing area and use a DigitalInkRecognitionContext object to specify the pre-context.
Swift
let ink: Ink = ...
let recognizer: DigitalInkRecognizer = ...
let preContext: String = ...
let writingArea = WritingArea.init(width: ..., height: ...)

let context = DigitalInkRecognitionContext.init(
    preContext: preContext,
    writingArea: writingArea)

recognizer.recognizeHandwriting(
  from: ink,
  context: context,
  completion: {
    (result: DigitalInkRecognitionResult?, error: Error?) in
    if let result = result, let candidate = result.candidates.first {
      NSLog("Recognized \(candidate.text)")
    } else {
      NSLog("Recognition error \(error)")
    }
  })
Objective-C
MLKInk *ink = ...;
MLKDigitalInkRecognizer *recognizer = ...;
NSString *preContext = ...;
MLKWritingArea *writingArea = [[MLKWritingArea alloc] initWithWidth:...
                                                             height:...];
MLKDigitalInkRecognitionContext *context =
    [[MLKDigitalInkRecognitionContext alloc] initWithPreContext:preContext
                                                    writingArea:writingArea];

[recognizer recognizeHandwritingFromInk:ink
                                context:context
                             completion:^(MLKDigitalInkRecognitionResult *_Nullable result,
                                          NSError *_Nullable error) {
  NSLog(@"Recognition result %@", result.candidates[0].text);
}];
Stroke ordering
Recognition accuracy is sensitive to the order of the strokes. The recognizer expects strokes to occur in the order people would naturally write, for example left to right for English. Anything that departs from this pattern, such as writing an English sentence starting with the last word, gives less accurate results.
Another example is when a word in the middle of an Ink is removed and replaced with another word. The revision is probably in the middle of a sentence, but the strokes for the revision sit at the end of the stroke sequence. In this case we recommend sending the newly written word separately to the API and merging the result with the prior recognitions using your own logic, as sketched below.
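A minimal sketch of that idea follows. It assumes your app tracks the text recognized so far and the strokes of just the rewritten word; the function and parameter names are hypothetical, and the splicing step is only one example of app-level merge logic.
Swift
// Sketch only: recognize just the strokes of the rewritten word and splice the
// result into previously recognized text. Names and merge logic are illustrative.
func recognizeRevision(revisionStrokes: [Stroke],
                       previousText: String,
                       replacedRange: Range<String.Index>,
                       recognizer: DigitalInkRecognizer,
                       completion: @escaping (String) -> Void) {
  let revisionInk = Ink.init(strokes: revisionStrokes)
  recognizer.recognize(ink: revisionInk) { result, error in
    guard let candidate = result?.candidates.first else {
      completion(previousText)  // keep the old text if recognition failed
      return
    }
    // Replace the old word with the newly recognized one.
    var merged = previousText
    merged.replaceSubrange(replacedRange, with: candidate.text)
    completion(merged)
  }
}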
Dealing with ambiguous shapes
There are cases where the meaning of the shape provided to the recognizer is ambiguous. For example, a rectangle with very rounded edges could be seen as either a rectangle or an ellipse.
These unclear cases can be handled by using recognition scores when they are available; only the shape classifiers provide scores. If the model is very confident, the top result's score will be much greater than the second best. If there is uncertainty, the scores of the top two results will be close. Also keep in mind that the shape classifiers interpret the whole Ink as a single shape. For example, if the Ink contains a rectangle and an ellipse next to each other, the recognizer may return one or the other (or something completely different) as the result, since a single recognition candidate cannot represent two shapes. One way to compare the top two scores is sketched below.
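The sketch below compares the scores of the top two candidates and accepts the result only when the margin between them is large. The margin threshold is an arbitrary illustrative value, and the score property is assumed to be populated only by shape models, as described above.
Swift
// Sketch only: accept a shape result only when its score clearly beats the
// runner-up. Per the description above, a confident model gives the top
// candidate a much higher score than the second best.
func confidentShape(from result: DigitalInkRecognitionResult,
                    minimumMargin: Double = 0.2) -> String? {
  let candidates = result.candidates
  guard let best = candidates.first, let bestScore = best.score?.doubleValue else {
    return nil  // no candidates, or this model does not report scores
  }
  // If there is a runner-up, require a clear margin between the two scores.
  if candidates.count > 1,
     let secondScore = candidates[1].score?.doubleValue,
     bestScore - secondScore < minimumMargin {
    return nil  // ambiguous: let the app ask the user or fall back
  }
  return best.text
}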