MediaPipeTasksText Framework Reference

Classes

The following classes are available globally.

  • Holds the base options that are used for the creation of any type of task. It has fields with important information such as the acceleration configuration, the TFLite model source, etc.

    Declaration

    Objective-C

    
    @interface MPPBaseOptions : NSObject <NSCopying>
  • Category is a utility class that contains a label, its display name, a float value as score, and the index of the label in the corresponding label file. Typically it’s used as the result of classification tasks.

    Declaration

    Objective-C

    
    @interface MPPCategory : NSObject
  • Represents the list of classifications for a given classifier head. Typically used as a result for classification tasks.

    Declaration

    Objective-C

    
    @interface MPPClassifications : NSObject
  • Represents the classification results of a model. Typically used as a result for classification tasks.

    Declaration

    Objective-C

    
    @interface MPPClassificationResult : NSObject
  • Represents the embedding for a given embedder head. Typically used in embedding tasks.

    One and only one of ‘floatEmbedding’ and ‘quantizedEmbedding’ will contain data, based on whether or not the embedder was configured to perform scalar quantization.

    Declaration

    Objective-C

    
    @interface MPPEmbedding : NSObject
  • Represents the embedding results of a model. Typically used as a result for embedding tasks.

    Declaration

    Objective-C

    
    @interface MPPEmbeddingResult : NSObject
  • @brief Predicts the language of an input text.

    This API expects a TFLite model with TFLite Model Metadata that contains the mandatory (described below) input tensor, output tensor, and the language codes in an AssociatedFile.

    Metadata is required for models with int32 input tensors because it contains the input process unit for the model’s Tokenizer. No metadata is required for models with string input tensors.

    Input tensor

    • One input tensor (kTfLiteString) of shape [1] containing the input string.

    Output tensor

    • One output tensor (kTfLiteFloat32) of shape [1 x N] where N is the number of languages.

    Declaration

    Objective-C

    
    @interface MPPLanguageDetector : NSObject
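
    A minimal Swift usage sketch for the detector described above. The Swift names (LanguageDetector, LanguageDetectorOptions, detect(text:), languagePredictions) are assumed Swift-side counterparts of the MPP-prefixed Objective-C declarations, and the model file name is a placeholder; verify both against the version of MediaPipeTasksText you have installed.

    ```swift
    import MediaPipeTasksText

    // Point the task at a bundled language detection model.
    // "language_detector.tflite" is a placeholder file name.
    let options = LanguageDetectorOptions()
    options.baseOptions.modelAssetPath = Bundle.main.path(
        forResource: "language_detector", ofType: "tflite")!

    let detector = try LanguageDetector(options: options)

    // The [1 x N] output tensor is surfaced as a list of
    // (languageCode, probability) predictions.
    let result = try detector.detect(text: "Dies ist ein deutscher Satz.")
    for prediction in result.languagePredictions {
        print(prediction.languageCode, prediction.probability)
    }
    ```

    The try calls surface the NSError out-parameters of the underlying Objective-C API as thrown Swift errors.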
  • Options for setting up a LanguageDetector.

    Declaration

    Objective-C

    
    @interface MPPLanguageDetectorOptions : MPPTaskOptions <NSCopying>
  • Represents a single language prediction generated by LanguageDetector, consisting of an i18n language code and the probability of that prediction.

    Declaration

    Objective-C

    @interface MPPLanguagePrediction : NSObject
    
    /** The i18n language / locale code for the prediction. */
    @property(nonatomic, readonly) NSString *languageCode;
    
    /** The probability for the prediction. */
    @property(nonatomic, readonly) float probability;
    
    /**
     * Initializes a new `LanguagePrediction` with the given language code and probability.
     *
     * @param languageCode The i18n language / locale code for the prediction.
     * @param probability The probability for the prediction.
     *
     * @return An instance of `LanguagePrediction` initialized with the given language code and
     * probability.
     */
    - (instancetype)initWithLanguageCode:(NSString *)languageCode probability:(float)probability;
    
    @end
  • Represents the results generated by LanguageDetector.

    Declaration

    Objective-C

    
    @interface MPPLanguageDetectorResult : MPPTaskResult
  • MediaPipe Tasks options base class. Any MediaPipe task-specific options class should extend this class.

    Declaration

    Objective-C

    
    @interface MPPTaskOptions : NSObject <NSCopying>
  • MediaPipe Tasks result base class. Any MediaPipe task result class should extend this class.

    Declaration

    Objective-C

    
    @interface MPPTaskResult : NSObject <NSCopying>
  • @brief Performs classification on text.

    This API expects a TFLite model with (optional) TFLite Model Metadata that contains the mandatory (described below) input tensors, output tensor, and the optional (but recommended) label items as AssociatedFiles with type TENSOR_AXIS_LABELS per output classification tensor.

    Metadata is required for models with int32 input tensors because it contains the input process unit for the model’s Tokenizer. No metadata is required for models with string input tensors.

    Input tensors

    • Three input tensors kTfLiteInt32 of shape [batch_size x bert_max_seq_len] representing the input ids, mask ids, and segment ids. This input signature requires a Bert Tokenizer process unit in the model metadata.
    • Or one input tensor kTfLiteInt32 of shape [batch_size x max_seq_len] representing the input ids. This input signature requires a Regex Tokenizer process unit in the model metadata.
    • Or one input tensor (kTfLiteString) that is shapeless or has shape [1] containing the input string.

    Output tensors

    At least one output tensor (kTfLiteFloat32/kBool) with:

    • N classes and shape [1 x N]
    • optional (but recommended) label map(s) as AssociatedFiles with type TENSOR_AXIS_LABELS, containing one label per line. The first such AssociatedFile (if any) is used to fill the categoryName field of the results. The displayName field is filled from the AssociatedFile (if any) whose locale matches the displayNamesLocale field of the MPPTextClassifierOptions used at creation time (“en” by default, i.e. English). If none of these are available, only the index field of the results will be filled.

    Declaration

    Objective-C

    
    @interface MPPTextClassifier : NSObject
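
    As a sketch, the classifier above can be driven from Swift as follows. The Swift names (TextClassifier, TextClassifierOptions, classify(text:)) are assumed to mirror the MPP-prefixed Objective-C declarations, and "bert_classifier.tflite" is a placeholder model file; confirm the exact API surface against your installed version.

    ```swift
    import MediaPipeTasksText

    // "bert_classifier.tflite" is a placeholder file name.
    let options = TextClassifierOptions()
    options.baseOptions.modelAssetPath = Bundle.main.path(
        forResource: "bert_classifier", ofType: "tflite")!

    let classifier = try TextClassifier(options: options)

    // classificationResult holds one Classifications object per
    // classifier head; each category carries the label, score, and
    // index fields described above.
    let result = try classifier.classify(text: "I loved this movie!")
    if let head = result.classificationResult.classifications.first {
        for category in head.categories {
            // categoryName is nil when the model has no label file;
            // in that case only the index field is populated.
            print(category.categoryName ?? "<unlabeled>", category.score)
        }
    }
    ```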
  • Options for setting up an MPPTextClassifier.

    Declaration

    Objective-C

    
    @interface MPPTextClassifierOptions : MPPTaskOptions <NSCopying>
  • Represents the classification results generated by MPPTextClassifier.

    Declaration

    Objective-C

    
    @interface MPPTextClassifierResult : MPPTaskResult
  • @brief Performs embedding extraction on text.

    This API expects a TFLite model with (optional) TFLite Model Metadata.

    Metadata is required for models with int32 input tensors because it contains the input process unit for the model’s Tokenizer. No metadata is required for models with string input tensors.

    Input tensors

    • Three input tensors kTfLiteInt32 of shape [batch_size x bert_max_seq_len] representing the input ids, mask ids, and segment ids. This input signature requires a Bert Tokenizer process unit in the model metadata.
    • Or one input tensor kTfLiteInt32 of shape [batch_size x max_seq_len] representing the input ids. This input signature requires a Regex Tokenizer process unit in the model metadata.
    • Or one input tensor (kTfLiteString) that is shapeless or has shape [1] containing the input string.

    Output tensors

    At least one output tensor (kTfLiteFloat32/kTfLiteUint8) with shape [1 x N] where N is the number of dimensions in the produced embeddings.

    Declaration

    Objective-C

    
    @interface MPPTextEmbedder : NSObject
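
    A hedged Swift sketch of the embedder above. The Swift names (TextEmbedder, TextEmbedderOptions, embed(text:)) are assumed counterparts of the MPP-prefixed Objective-C declarations, and the model file name is a placeholder; check both against your installed version.

    ```swift
    import MediaPipeTasksText

    // "universal_sentence_encoder.tflite" is a placeholder file name.
    let options = TextEmbedderOptions()
    options.baseOptions.modelAssetPath = Bundle.main.path(
        forResource: "universal_sentence_encoder", ofType: "tflite")!

    let embedder = try TextEmbedder(options: options)

    // Each Embedding carries either floatEmbedding or
    // quantizedEmbedding, never both, depending on whether the
    // embedder was configured for scalar quantization.
    let result = try embedder.embed(text: "It's a charming journey.")
    if let embedding = result.embeddingResult.embeddings.first,
       let vector = embedding.floatEmbedding {
        print("embedding dimensions:", vector.count)
    }
    ```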
  • Options for setting up an MPPTextEmbedder.

    Declaration

    Objective-C

    
    @interface MPPTextEmbedderOptions : MPPTaskOptions <NSCopying>
  • Represents the embedding results generated by MPPTextEmbedder.

    Declaration

    Objective-C

    
    @interface MPPTextEmbedderResult : MPPTaskResult