Detect poses with ML Kit on Android

ML Kit provides two optimized SDKs for pose detection.

| SDK name | pose-detection | pose-detection-accurate |
| --- | --- | --- |
| Implementation | Code and assets are statically linked to your app at build time. | Code and assets are statically linked to your app at build time. |
| App size impact (including code and assets) | ~10.1 MB | ~13.3 MB |
| Performance | Pixel 3XL: ~30 FPS | Pixel 3XL: ~23 FPS with CPU, ~30 FPS with GPU |

Before you begin

  1. Be sure to include Google's Maven repository in both the buildscript and allprojects sections of your project-level build.gradle file.
  2. Add the dependencies for the ML Kit Android libraries to your module's app-level Gradle file, which is usually app/build.gradle:

    dependencies {
      // If you want to use the base sdk
      implementation 'com.google.mlkit:pose-detection:18.0.0-beta4'
      // If you want to use the accurate sdk
      implementation 'com.google.mlkit:pose-detection-accurate:18.0.0-beta4'
    }
    

1. Create an instance of PoseDetector

PoseDetector options

To detect a pose in an image, first create an instance of PoseDetector and optionally specify the detector settings.

Detection mode

The PoseDetector operates in two detection modes. Be sure to choose the one that matches your use case.

STREAM_MODE (default)
The pose detector first detects the most prominent person in the image and then runs pose detection. In subsequent frames, the person-detection step is not run unless the person becomes occluded or is no longer detected with high confidence. The pose detector attempts to track the most prominent person and returns their pose in each inference. This reduces latency and smooths out the detections. Use this mode when you want to detect poses in a video stream.
SINGLE_IMAGE_MODE
The pose detector detects a person and then runs pose detection. The person-detection step runs for every image, so latency is higher, and there is no person tracking. Use this mode when running pose detection on static images or where tracking is not desired.

Hardware configuration

The PoseDetector supports multiple hardware configurations for optimizing performance:

  • CPU: run the detector using only the CPU
  • CPU_GPU: run the detector using both the CPU and GPU

When building the detector options, you can use the API setPreferredHardwareConfigs to control the hardware selection. By default, all hardware configurations are set as preferred.

ML Kit takes the availability, stability, correctness, and latency of each configuration into consideration and picks the best one from the preferred configurations. If none of the preferred configurations is applicable, the CPU configuration is used automatically as a fallback. ML Kit performs these checks and the related preparation in a non-blocking way before enabling any acceleration, so it is most likely that the first time a user runs the detector, it will use the CPU. After all the preparation has finished, the best configuration is used in the following runs.

Example usages of setPreferredHardwareConfigs:

  • To let ML Kit pick the best configuration, do not call this API.
  • If you do not want to enable any acceleration, pass in only CPU.
  • If you want to use the GPU to offload the CPU, even if the GPU could be slower, pass in only CPU_GPU (as shown in the sketch below).
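For example, to prefer GPU offload while keeping the automatic CPU fallback described above, the option builder might look like this (a minimal sketch; the CPU_GPU constant is inherited from PoseDetectorOptionsBase):

Kotlin

// Sketch: prefer the CPU_GPU configuration so the GPU can offload the CPU.
// ML Kit still falls back to CPU if the GPU configuration is not applicable.
val options = PoseDetectorOptions.Builder()
    .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
    .setPreferredHardwareConfigs(PoseDetectorOptionsBase.CPU_GPU)
    .build()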

Specify the pose detector options:

Kotlin

// Base pose detector with streaming frames, when depending on the pose-detection sdk
val options = PoseDetectorOptions.Builder()
    .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
    .build()

// Accurate pose detector on static images, when depending on the pose-detection-accurate sdk
val options = AccuratePoseDetectorOptions.Builder()
    .setDetectorMode(AccuratePoseDetectorOptions.SINGLE_IMAGE_MODE)
    .build()

Java

// Base pose detector with streaming frames, when depending on the pose-detection sdk
PoseDetectorOptions options =
   new PoseDetectorOptions.Builder()
       .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
       .build();

// Accurate pose detector on static images, when depending on the pose-detection-accurate sdk
AccuratePoseDetectorOptions options =
   new AccuratePoseDetectorOptions.Builder()
       .setDetectorMode(AccuratePoseDetectorOptions.SINGLE_IMAGE_MODE)
       .build();

Finally, create an instance of PoseDetector, passing the options you specified:

Kotlin

val poseDetector = PoseDetection.getClient(options)

Java

PoseDetector poseDetector = PoseDetection.getClient(options);
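The detector holds native model resources, so release it when it is no longer needed. A minimal sketch, assuming the detector is scoped to an Activity (ML Kit detectors implement Closeable):

Kotlin

override fun onDestroy() {
    super.onDestroy()
    // Releasing the detector frees the underlying model resources.
    poseDetector.close()
}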

2. Prepare the input image

To detect poses in an image, create an InputImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the InputImage object to the PoseDetector.

For pose detection, you should use an image with dimensions of at least 480x360 pixels. If you are detecting poses in real time, capturing frames at this minimum resolution can help reduce latency.

You can create an InputImage object from different sources; each is explained below.

Using a media.Image

To create an InputImage object from a media.Image object, such as when you capture an image from a device's camera, pass the media.Image object and the image's rotation to InputImage.fromMediaImage().

If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you.

Kotlin

private class YourImageAnalyzer : ImageAnalysis.Analyzer {

    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage != null) {
            val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}

Java

private class YourAnalyzer implements ImageAnalysis.Analyzer {

    @Override
    public void analyze(ImageProxy imageProxy) {
        Image mediaImage = imageProxy.getImage();
        if (mediaImage != null) {
          InputImage image =
                InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
          // Pass image to an ML Kit Vision API
          // ...
        }
    }
}

If you don't use a camera library that gives you the image's rotation degree, you can calculate it from the device's rotation degree and the orientation of the camera sensor in the device:

Kotlin

private val ORIENTATIONS = SparseIntArray()

init {
    ORIENTATIONS.append(Surface.ROTATION_0, 0)
    ORIENTATIONS.append(Surface.ROTATION_90, 90)
    ORIENTATIONS.append(Surface.ROTATION_180, 180)
    ORIENTATIONS.append(Surface.ROTATION_270, 270)
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
@Throws(CameraAccessException::class)
private fun getRotationCompensation(cameraId: String, activity: Activity, isFrontFacing: Boolean): Int {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    val deviceRotation = activity.windowManager.defaultDisplay.rotation
    var rotationCompensation = ORIENTATIONS.get(deviceRotation)

    // Get the device's sensor orientation.
    val cameraManager = activity.getSystemService(CAMERA_SERVICE) as CameraManager
    val sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION)!!

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360
    }
    return rotationCompensation
}

Java

private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0, 0);
    ORIENTATIONS.append(Surface.ROTATION_90, 90);
    ORIENTATIONS.append(Surface.ROTATION_180, 180);
    ORIENTATIONS.append(Surface.ROTATION_270, 270);
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
private int getRotationCompensation(String cameraId, Activity activity, boolean isFrontFacing)
        throws CameraAccessException {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int rotationCompensation = ORIENTATIONS.get(deviceRotation);

    // Get the device's sensor orientation.
    CameraManager cameraManager = (CameraManager) activity.getSystemService(CAMERA_SERVICE);
    int sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION);

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360;
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360;
    }
    return rotationCompensation;
}

Then, pass the media.Image object and the rotation degree value to InputImage.fromMediaImage():

Kotlin

val image = InputImage.fromMediaImage(mediaImage, rotation)

Java

InputImage image = InputImage.fromMediaImage(mediaImage, rotation);

Using a file URI

To create an InputImage object from a file URI, pass the app context and file URI to InputImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app.

Kotlin

val image: InputImage
try {
    image = InputImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
}

Java

InputImage image;
try {
    image = InputImage.fromFilePath(context, uri);
} catch (IOException e) {
    e.printStackTrace();
}

Using a ByteBuffer or ByteArray

To create an InputImage object from a ByteBuffer or a ByteArray, first calculate the image rotation degree as previously described for media.Image input. Then, create the InputImage object with the buffer or array, together with the image's height, width, color encoding format, and rotation degree:

Kotlin

val image = InputImage.fromByteBuffer(
        byteBuffer,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)
// Or:
val image = InputImage.fromByteArray(
        byteArray,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)

Java

InputImage image = InputImage.fromByteBuffer(byteBuffer,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);
// Or:
InputImage image = InputImage.fromByteArray(
        byteArray,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);

Using a Bitmap

To create an InputImage object from a Bitmap object, make the following declaration:

Kotlin

val image = InputImage.fromBitmap(bitmap, 0)

Java

InputImage image = InputImage.fromBitmap(bitmap, rotationDegree);

The image is represented by a Bitmap object together with rotation degrees.

3. Process the image

Pass your prepared InputImage object to the PoseDetector's process method.

Kotlin

val result = poseDetector.process(image)
       .addOnSuccessListener { results ->
           // Task completed successfully
           // ...
       }
       .addOnFailureListener { e ->
           // Task failed with an exception
           // ...
       }

Java

Task<Pose> result =
        poseDetector.process(image)
                .addOnSuccessListener(
                        new OnSuccessListener<Pose>() {
                            @Override
                            public void onSuccess(Pose pose) {
                                // Task completed successfully
                                // ...
                            }
                        })
                .addOnFailureListener(
                        new OnFailureListener() {
                            @Override
                            public void onFailure(@NonNull Exception e) {
                                // Task failed with an exception
                                // ...
                            }
                        });

4. Get information about the detected pose

If a person is detected in the image, the pose detection API returns a Pose object with 33 PoseLandmarks.

If the person was not completely inside the image, the model assigns the coordinates of the missing landmarks outside the frame and gives them low InFrameConfidence values.

If no person was detected in the frame, the Pose object contains no PoseLandmarks.

Kotlin

// Get all PoseLandmarks. If no person was detected, the list will be empty
val allPoseLandmarks = pose.getAllPoseLandmarks()

// Or get specific PoseLandmarks individually. These will all be null if no person
// was detected
val leftShoulder = pose.getPoseLandmark(PoseLandmark.LEFT_SHOULDER)
val rightShoulder = pose.getPoseLandmark(PoseLandmark.RIGHT_SHOULDER)
val leftElbow = pose.getPoseLandmark(PoseLandmark.LEFT_ELBOW)
val rightElbow = pose.getPoseLandmark(PoseLandmark.RIGHT_ELBOW)
val leftWrist = pose.getPoseLandmark(PoseLandmark.LEFT_WRIST)
val rightWrist = pose.getPoseLandmark(PoseLandmark.RIGHT_WRIST)
val leftHip = pose.getPoseLandmark(PoseLandmark.LEFT_HIP)
val rightHip = pose.getPoseLandmark(PoseLandmark.RIGHT_HIP)
val leftKnee = pose.getPoseLandmark(PoseLandmark.LEFT_KNEE)
val rightKnee = pose.getPoseLandmark(PoseLandmark.RIGHT_KNEE)
val leftAnkle = pose.getPoseLandmark(PoseLandmark.LEFT_ANKLE)
val rightAnkle = pose.getPoseLandmark(PoseLandmark.RIGHT_ANKLE)
val leftPinky = pose.getPoseLandmark(PoseLandmark.LEFT_PINKY)
val rightPinky = pose.getPoseLandmark(PoseLandmark.RIGHT_PINKY)
val leftIndex = pose.getPoseLandmark(PoseLandmark.LEFT_INDEX)
val rightIndex = pose.getPoseLandmark(PoseLandmark.RIGHT_INDEX)
val leftThumb = pose.getPoseLandmark(PoseLandmark.LEFT_THUMB)
val rightThumb = pose.getPoseLandmark(PoseLandmark.RIGHT_THUMB)
val leftHeel = pose.getPoseLandmark(PoseLandmark.LEFT_HEEL)
val rightHeel = pose.getPoseLandmark(PoseLandmark.RIGHT_HEEL)
val leftFootIndex = pose.getPoseLandmark(PoseLandmark.LEFT_FOOT_INDEX)
val rightFootIndex = pose.getPoseLandmark(PoseLandmark.RIGHT_FOOT_INDEX)
val nose = pose.getPoseLandmark(PoseLandmark.NOSE)
val leftEyeInner = pose.getPoseLandmark(PoseLandmark.LEFT_EYE_INNER)
val leftEye = pose.getPoseLandmark(PoseLandmark.LEFT_EYE)
val leftEyeOuter = pose.getPoseLandmark(PoseLandmark.LEFT_EYE_OUTER)
val rightEyeInner = pose.getPoseLandmark(PoseLandmark.RIGHT_EYE_INNER)
val rightEye = pose.getPoseLandmark(PoseLandmark.RIGHT_EYE)
val rightEyeOuter = pose.getPoseLandmark(PoseLandmark.RIGHT_EYE_OUTER)
val leftEar = pose.getPoseLandmark(PoseLandmark.LEFT_EAR)
val rightEar = pose.getPoseLandmark(PoseLandmark.RIGHT_EAR)
val leftMouth = pose.getPoseLandmark(PoseLandmark.LEFT_MOUTH)
val rightMouth = pose.getPoseLandmark(PoseLandmark.RIGHT_MOUTH)

Java

// Get all PoseLandmarks. If no person was detected, the list will be empty
List<PoseLandmark> allPoseLandmarks = pose.getAllPoseLandmarks();

// Or get specific PoseLandmarks individually. These will all be null if no person
// was detected
PoseLandmark leftShoulder = pose.getPoseLandmark(PoseLandmark.LEFT_SHOULDER);
PoseLandmark rightShoulder = pose.getPoseLandmark(PoseLandmark.RIGHT_SHOULDER);
PoseLandmark leftElbow = pose.getPoseLandmark(PoseLandmark.LEFT_ELBOW);
PoseLandmark rightElbow = pose.getPoseLandmark(PoseLandmark.RIGHT_ELBOW);
PoseLandmark leftWrist = pose.getPoseLandmark(PoseLandmark.LEFT_WRIST);
PoseLandmark rightWrist = pose.getPoseLandmark(PoseLandmark.RIGHT_WRIST);
PoseLandmark leftHip = pose.getPoseLandmark(PoseLandmark.LEFT_HIP);
PoseLandmark rightHip = pose.getPoseLandmark(PoseLandmark.RIGHT_HIP);
PoseLandmark leftKnee = pose.getPoseLandmark(PoseLandmark.LEFT_KNEE);
PoseLandmark rightKnee = pose.getPoseLandmark(PoseLandmark.RIGHT_KNEE);
PoseLandmark leftAnkle = pose.getPoseLandmark(PoseLandmark.LEFT_ANKLE);
PoseLandmark rightAnkle = pose.getPoseLandmark(PoseLandmark.RIGHT_ANKLE);
PoseLandmark leftPinky = pose.getPoseLandmark(PoseLandmark.LEFT_PINKY);
PoseLandmark rightPinky = pose.getPoseLandmark(PoseLandmark.RIGHT_PINKY);
PoseLandmark leftIndex = pose.getPoseLandmark(PoseLandmark.LEFT_INDEX);
PoseLandmark rightIndex = pose.getPoseLandmark(PoseLandmark.RIGHT_INDEX);
PoseLandmark leftThumb = pose.getPoseLandmark(PoseLandmark.LEFT_THUMB);
PoseLandmark rightThumb = pose.getPoseLandmark(PoseLandmark.RIGHT_THUMB);
PoseLandmark leftHeel = pose.getPoseLandmark(PoseLandmark.LEFT_HEEL);
PoseLandmark rightHeel = pose.getPoseLandmark(PoseLandmark.RIGHT_HEEL);
PoseLandmark leftFootIndex = pose.getPoseLandmark(PoseLandmark.LEFT_FOOT_INDEX);
PoseLandmark rightFootIndex = pose.getPoseLandmark(PoseLandmark.RIGHT_FOOT_INDEX);
PoseLandmark nose = pose.getPoseLandmark(PoseLandmark.NOSE);
PoseLandmark leftEyeInner = pose.getPoseLandmark(PoseLandmark.LEFT_EYE_INNER);
PoseLandmark leftEye = pose.getPoseLandmark(PoseLandmark.LEFT_EYE);
PoseLandmark leftEyeOuter = pose.getPoseLandmark(PoseLandmark.LEFT_EYE_OUTER);
PoseLandmark rightEyeInner = pose.getPoseLandmark(PoseLandmark.RIGHT_EYE_INNER);
PoseLandmark rightEye = pose.getPoseLandmark(PoseLandmark.RIGHT_EYE);
PoseLandmark rightEyeOuter = pose.getPoseLandmark(PoseLandmark.RIGHT_EYE_OUTER);
PoseLandmark leftEar = pose.getPoseLandmark(PoseLandmark.LEFT_EAR);
PoseLandmark rightEar = pose.getPoseLandmark(PoseLandmark.RIGHT_EAR);
PoseLandmark leftMouth = pose.getPoseLandmark(PoseLandmark.LEFT_MOUTH);
PoseLandmark rightMouth = pose.getPoseLandmark(PoseLandmark.RIGHT_MOUTH);
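Each non-null PoseLandmark exposes its position in image coordinates and an in-frame likelihood. As a brief illustration, a hypothetical helper (not part of the API above) can combine three landmarks to compute a joint angle, for example at the left elbow:

Kotlin

import kotlin.math.abs
import kotlin.math.atan2

// Hypothetical helper: left elbow angle in degrees, or null when no person
// was detected (individual landmarks are null in that case).
fun leftElbowAngleDegrees(pose: Pose): Double? {
    val shoulder = pose.getPoseLandmark(PoseLandmark.LEFT_SHOULDER) ?: return null
    val elbow = pose.getPoseLandmark(PoseLandmark.LEFT_ELBOW) ?: return null
    val wrist = pose.getPoseLandmark(PoseLandmark.LEFT_WRIST) ?: return null

    // PoseLandmark.position is a PointF in image pixel coordinates.
    val angle = Math.toDegrees(
        atan2((wrist.position.y - elbow.position.y).toDouble(),
              (wrist.position.x - elbow.position.x).toDouble()) -
        atan2((shoulder.position.y - elbow.position.y).toDouble(),
              (shoulder.position.x - elbow.position.x).toDouble())
    )
    val normalized = abs(angle)
    return if (normalized > 180) 360 - normalized else normalized
}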

Tips to improve performance

The quality of your results depends on the quality of the input image:

  • For ML Kit to accurately detect pose, the person in the image should be represented by enough pixel data; for best performance, the subject should be at least 256x256 pixels.
  • If you detect poses in a real-time application, you may also want to consider the overall dimensions of the input images. Smaller images can be processed faster, so to reduce latency, capture images at lower resolutions, but keep the above resolution requirements in mind and ensure that the subject occupies as much of the image as possible.
  • Poor image focus can also impact accuracy. If you don't get acceptable results, ask the user to recapture the image.

If you want to use pose detection in a real-time application, follow these guidelines to achieve the best frame rates:

  • Use the base pose-detection SDK and STREAM_MODE.
  • Consider capturing images at a lower resolution. However, also keep this API's image dimension requirements in mind.
  • If you use the Camera or camera2 API, throttle calls to the detector. If a new video frame becomes available while the detector is running, drop the frame. See the VisionProcessorBase class in the quickstart sample app for an example.
  • If you use the CameraX API, be sure the backpressure strategy is set to its default value, ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST. This guarantees that only one image is delivered for analysis at a time. If more images are produced while the analyzer is busy, they are dropped automatically and not queued for delivery. Once the image being analyzed is closed by calling ImageProxy.close(), the next latest image is delivered. A sketch combining these points follows this list.
  • If you use the output of the detector to overlay graphics on the input image, first get the result from ML Kit, then render the image and the overlay in a single step. This renders to the display surface only once for each input frame. See the CameraSourcePreview and GraphicOverlay classes in the quickstart sample app for an example.
  • If you use the Camera2 API, capture images in ImageFormat.YUV_420_888 format. If you use the older Camera API, capture images in ImageFormat.NV21 format.
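Putting the CameraX guidance together, here is a minimal sketch (PoseAnalyzer and the cameraExecutor argument are illustrative, not from the quickstart sample) that captures near the minimum resolution, keeps only the latest frame, and closes each ImageProxy only after detection completes:

Kotlin

import android.util.Size
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetector

// Illustrative analyzer: with STRATEGY_KEEP_ONLY_LATEST, CameraX delivers the
// next latest frame only after ImageProxy.close(), so stale frames are dropped.
class PoseAnalyzer(private val poseDetector: PoseDetector) : ImageAnalysis.Analyzer {
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image ?: run { imageProxy.close(); return }
        val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
        poseDetector.process(image)
            .addOnSuccessListener { pose ->
                // Get the result first, then render frame and overlay in one step.
            }
            .addOnCompleteListener { imageProxy.close() }
    }
}

// Capture close to the 480x360 minimum to reduce latency; KEEP_ONLY_LATEST is
// the default backpressure strategy but is set explicitly here for clarity.
val imageAnalysis = ImageAnalysis.Builder()
    .setTargetResolution(Size(480, 360))
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()
    .also { it.setAnalyzer(cameraExecutor, PoseAnalyzer(poseDetector)) } // cameraExecutor: your Executor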

Next steps

  • To learn how to use pose landmarks to classify poses, see Pose classification tips.