Detect face mesh info with ML Kit on Android

You can use ML Kit to detect faces in selfie-like images and videos.

Face mesh detection API
SDK name: face-mesh-detection
Implementation: Code and assets are statically linked to your app at build time.
App size impact: About 6.4 MB
Performance: Real-time on most devices.

Try it out

Before you begin

  1. In your project-level build.gradle file, make sure to include Google's Maven repository in both your buildscript and allprojects sections (see the repository sketch after this list).

  2. Add the dependency for the ML Kit face mesh detection library to your module's app-level Gradle file, which is usually app/build.gradle:

    dependencies {
     // ...
    
     implementation 'com.google.mlkit:face-mesh-detection:16.0.0-beta1'
    }
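
As a reference point, here is a minimal sketch of the project-level build.gradle repository configuration mentioned in step 1. It assumes a standard Gradle setup; mavenCentral() is shown only as a common companion repository and may differ in your project.

    buildscript {
        repositories {
            google()        // hosts the ML Kit artifacts
            mavenCentral()
        }
    }

    allprojects {
        repositories {
            google()
            mavenCentral()
        }
    }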
    

Input image guidelines

  1. Images should be captured within about 2 meters (7 feet) of the device's camera, so that the faces are sufficiently large for optimal face mesh recognition. In general, the larger the face, the better the face mesh recognition.

  2. The face should be facing the camera with at least half of the face visible. Any large object between the face and the camera may result in lower accuracy.

If you want to detect faces in a real-time application, you should also consider the overall dimensions of the input image. Smaller images can be processed faster, so capturing images at a lower resolution reduces latency. However, keep the accuracy requirements above in mind and ensure that the subject's face occupies as much of the image as possible.
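
For instance, if you capture frames with CameraX, you can request a smaller analysis resolution up front. This is a minimal sketch, assuming CameraX's ImageAnalysis use case; the 480x360 size is illustrative, and CameraX picks the closest supported resolution.

Kotlin

import android.util.Size
import androidx.camera.core.ImageAnalysis

val imageAnalysis = ImageAnalysis.Builder()
    // Ask for a small analysis size to keep latency low.
    .setTargetResolution(Size(480, 360))
    // Drop stale frames so the detector always sees the latest one.
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()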

Configure the face mesh detector

If you want to change any of the face mesh detector's default settings, specify those settings with a FaceMeshDetectorOptions object. You can change the following settings:

  1. setUseCase

    • BOUNDING_BOX_ONLY: Provides only a bounding box for a detected face mesh. This is the fastest face detector, but it has a range limitation (faces must be within about 2 meters or 7 feet of the camera).

    • FACE_MESH (default option): Provides a bounding box and additional face mesh info (468 3D points and triangle info). Compared to the BOUNDING_BOX_ONLY use case, latency increases by about 15%, as measured on Pixel 3.

For example:

Kotlin

val defaultDetector = FaceMeshDetection.getClient(
  FaceMeshDetectorOptions.DEFAULT_OPTIONS)

val boundingBoxDetector = FaceMeshDetection.getClient(
  FaceMeshDetectorOptions.Builder()
    .setUseCase(UseCase.BOUNDING_BOX_ONLY)
    .build()
)

Java

FaceMeshDetector defaultDetector =
        FaceMeshDetection.getClient(
                FaceMeshDetectorOptions.DEFAULT_OPTIONS);

FaceMeshDetector boundingBoxDetector = FaceMeshDetection.getClient(
        new FaceMeshDetectorOptions.Builder()
                .setUseCase(UseCase.BOUNDING_BOX_ONLY)
                .build()
        );

Prepare the input image

To detect faces in an image, create an InputImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the InputImage object to the FaceMeshDetector's process method.

For face mesh detection, you should use an image with dimensions of at least 480x360 pixels. If you are detecting faces in real time, capturing frames at this minimum resolution can help reduce latency.

You can create an InputImage object from different sources, each of which is explained below.

Using a media.Image

To create an InputImage object from a media.Image object, such as when you capture an image from a device's camera, pass the media.Image object and the image's rotation to InputImage.fromMediaImage().

If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you.

Kotlin

private class YourImageAnalyzer : ImageAnalysis.Analyzer {

    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage != null) {
            val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            // Pass image to an ML Kit Vision API
            // ...
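            // Note: after the ML Kit task completes (for example in an addOnCompleteListener),
            // remember to call imageProxy.close() so CameraX can deliver further frames.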
        }
    }
}

Java

private class YourAnalyzer implements ImageAnalysis.Analyzer {

    @Override
    public void analyze(ImageProxy imageProxy) {
        Image mediaImage = imageProxy.getImage();
        if (mediaImage != null) {
          InputImage image =
                InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
          // Pass image to an ML Kit Vision API
          // ...
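          // Note: after the ML Kit task completes (for example in an addOnCompleteListener),
          // remember to call imageProxy.close() so CameraX can deliver further frames.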
        }
    }
}

If you don't use a camera library that gives you the image's rotation degree, you can calculate it from the device's rotation degree and the orientation of the camera sensor in the device:

Kotlin

private val ORIENTATIONS = SparseIntArray()

init {
    ORIENTATIONS.append(Surface.ROTATION_0, 0)
    ORIENTATIONS.append(Surface.ROTATION_90, 90)
    ORIENTATIONS.append(Surface.ROTATION_180, 180)
    ORIENTATIONS.append(Surface.ROTATION_270, 270)
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
@Throws(CameraAccessException::class)
private fun getRotationCompensation(cameraId: String, activity: Activity, isFrontFacing: Boolean): Int {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    val deviceRotation = activity.windowManager.defaultDisplay.rotation
    var rotationCompensation = ORIENTATIONS.get(deviceRotation)

    // Get the device's sensor orientation.
    val cameraManager = activity.getSystemService(CAMERA_SERVICE) as CameraManager
    val sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION)!!

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360
    }
    return rotationCompensation
}

Java

private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0, 0);
    ORIENTATIONS.append(Surface.ROTATION_90, 90);
    ORIENTATIONS.append(Surface.ROTATION_180, 180);
    ORIENTATIONS.append(Surface.ROTATION_270, 270);
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
private int getRotationCompensation(String cameraId, Activity activity, boolean isFrontFacing)
        throws CameraAccessException {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int rotationCompensation = ORIENTATIONS.get(deviceRotation);

    // Get the device's sensor orientation.
    CameraManager cameraManager = (CameraManager) activity.getSystemService(CAMERA_SERVICE);
    int sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION);

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360;
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360;
    }
    return rotationCompensation;
}

Then, pass the media.Image object and the rotation degree value to InputImage.fromMediaImage():

Kotlin

val image = InputImage.fromMediaImage(mediaImage, rotation)

Java

InputImage image = InputImage.fromMediaImage(mediaImage, rotation);

Using a file URI

To create an InputImage object from a file URI, pass the app context and file URI to InputImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app.

Kotlin

val image: InputImage
try {
    image = InputImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
}

Java

InputImage image;
try {
    image = InputImage.fromFilePath(context, uri);
} catch (IOException e) {
    e.printStackTrace();
}
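
For completeness, here is a minimal sketch of obtaining such a URI; it assumes the AndroidX Activity Result API, where ActivityResultContracts.GetContent wraps an ACTION_GET_CONTENT intent. The pickImage property name and the "image/*" filter are illustrative.

Kotlin

import androidx.activity.result.contract.ActivityResultContracts

// Registered as a property of a ComponentActivity (use requireContext() in a Fragment).
private val pickImage = registerForActivityResult(ActivityResultContracts.GetContent()) { uri ->
    uri?.let {
        val image = InputImage.fromFilePath(this, it)
        // Pass image to the face mesh detector's process method
    }
}

// Launch the picker, for example from a button click:
// pickImage.launch("image/*")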

Using a ByteBuffer or ByteArray

To create an InputImage object from a ByteBuffer or a ByteArray, first calculate the image rotation degree as previously described for media.Image input. Then, create the InputImage object with the buffer or array, together with the image's height, width, color encoding format, and rotation degree:

Kotlin

val image = InputImage.fromByteBuffer(
        byteBuffer,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)
// Or:
val image = InputImage.fromByteArray(
        byteArray,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)

Java

InputImage image = InputImage.fromByteBuffer(byteBuffer,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);
// Or:
InputImage image = InputImage.fromByteArray(
        byteArray,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);

Using a Bitmap

To create an InputImage object from a Bitmap object, make the following declaration:

Kotlin

val image = InputImage.fromBitmap(bitmap, 0)

Java

InputImage image = InputImage.fromBitmap(bitmap, rotationDegree);

The image is represented by a Bitmap object together with rotation degrees.

Process the image

Pass the image to the process method:

Kotlin

val result = detector.process(image)
        .addOnSuccessListener { result ->
            // Task completed successfully
            // …
        }
        .addOnFailureListener { e ->
            // Task failed with an exception
            // …
        }

Java


Task<List<FaceMesh>> result = detector.process(image)
        .addOnSuccessListener(
                new OnSuccessListener<List<FaceMesh>>() {
                    @Override
                    public void onSuccess(List<FaceMesh> result) {
                        // Task completed successfully
                        // …
                    }
                })
        .addOnFailureListener(
                new OnFailureListener() {
                    @Override
                    public void onFailure(Exception e) {
                        // Task failed with an exception
                        // …
                    }
                });

Get information about the detected face mesh

If any faces are detected in the image, a list of FaceMesh objects is passed to the success listener. Each FaceMesh represents a face that was detected in the image. For each face mesh, you can get its bounding coordinates in the input image, as well as any other information you configured the face mesh detector to find.

Kotlin

for (faceMesh in faceMeshs) {
    val bounds: Rect = faceMesh.boundingBox

    // Gets all points
    val faceMeshpoints = faceMesh.allPoints
    for (faceMeshpoint in faceMeshpoints) {
      val index: Int = faceMeshpoint.index
      val position = faceMeshpoint.position
    }

    // Gets triangle info
    val triangles: List<Triangle<FaceMeshPoint>> = faceMesh.allTriangles
    for (triangle in triangles) {
      // 3 Points connecting to each other and representing a triangle area.
      val connectedPoints = triangle.allPoints
    }
}

Java

for (FaceMesh faceMesh : faceMeshs) {
    Rect bounds = faceMesh.getBoundingBox();

    // Gets all points
    List<FaceMeshPoint> faceMeshpoints = faceMesh.getAllPoints();
    for (FaceMeshPoint faceMeshpoint : faceMeshpoints) {
        int index = faceMeshpoint.getIndex();
        PointF3D position = faceMeshpoint.getPosition();
    }

    // Gets triangle info
    List<Triangle<FaceMeshPoint>> triangles = faceMesh.getAllTriangles();
    for (Triangle<FaceMeshPoint> triangle : triangles) {
        // 3 Points connecting to each other and representing a triangle area.
        List<FaceMeshPoint> connectedPoints = triangle.getAllPoints();
    }
}
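
Putting the pieces together, here is a minimal end-to-end sketch. It only reuses the calls shown above; the bitmap value is assumed to exist, and close(), which ML Kit detectors expose via Closeable, is shown as the cleanup step when the detector is no longer needed.

Kotlin

val detector = FaceMeshDetection.getClient(FaceMeshDetectorOptions.DEFAULT_OPTIONS)
val image = InputImage.fromBitmap(bitmap, 0)

detector.process(image)
    .addOnSuccessListener { faceMeshs ->
        for (faceMesh in faceMeshs) {
            val bounds = faceMesh.boundingBox
            // Use the bounding box and faceMesh.allPoints as shown above
        }
    }
    .addOnFailureListener { e ->
        // Task failed with an exception
    }

// Release the detector's resources when you are done with it, e.g. in onDestroy():
// detector.close()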