Augmented Faces
Augmented Faces lets your app automatically identify different regions of a detected face, and use those regions to overlay assets such as textures and models in a way that properly matches the contours and regions of an individual face.
How does Augmented Faces work?
The [AugmentedFaces](//github.com/google-ar/sceneform-android-sdk/tree/v1.15.0/samples/augmentedfaces) sample app overlays the facial features of a fox onto a user's face using both the assets of a model and a texture.
The 3D model consists of two fox ears and a fox nose. Each is a separate bone that can be moved individually to follow the facial region it is attached to.
The texture consists of eye shadow, freckles, and other coloring.
When you run the sample app, it calls APIs to detect a face and overlays both the texture and the models onto the face.
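Before any faces can be detected, Augmented Faces has to be enabled on the ARCore session. The following is a minimal sketch, not the sample's exact code; it assumes an Android `context` is available and uses the front-facing camera, which face tracking requires:

```java
import java.util.EnumSet;

import com.google.ar.core.Config;
import com.google.ar.core.Session;

// Create a session on the front-facing (selfie) camera; Augmented Faces
// is not supported on the rear camera.
Session session = new Session(context, EnumSet.of(Session.Feature.FRONT_CAMERA));

// Enable the 3D face mesh so AugmentedFace trackables are produced.
Config config = new Config(session);
config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
session.configure(config);
```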
Identifying an augmented face mesh
In order to properly overlay textures and 3D models on a detected face, ARCore provides detected regions and an augmented face mesh. This mesh is a virtual representation of the face, and consists of the vertices, facial regions, and the center of the user's head. Note that the [orientation](/sceneform/develop/augmented-faces/developer-guide#face_mesh_orientation) of the mesh is different for Sceneform.
When the camera detects a user's face, ARCore performs the following steps to generate the augmented face mesh, as well as the center and region poses:
1. It identifies the center pose and a face mesh.
   - The center pose, located behind the nose, is the physical center point of the user's head (in other words, inside the skull).
   - The face mesh consists of hundreds of vertices that make up the face, and is defined relative to the center pose.
2. The `AugmentedFace` class uses the face mesh and center pose to identify face region poses on the user's face, as shown in the sketch after this list. These regions are:
   - Left forehead (`LEFT_FOREHEAD`)
   - Right forehead (`RIGHT_FOREHEAD`)
   - Tip of the nose (`NOSE_TIP`)
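The following is a minimal sketch of reading these poses and the mesh each frame; it assumes a session configured as above and uses the standard `AugmentedFace` getters:

```java
import java.nio.FloatBuffer;

import com.google.ar.core.AugmentedFace;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.TrackingState;

// Reads the augmented face mesh data for each face tracked in a frame,
// e.g. from the per-frame update callback after session.update().
void readFaceMeshes(Frame frame) {
  for (AugmentedFace face : frame.getUpdatedTrackables(AugmentedFace.class)) {
    if (face.getTrackingState() != TrackingState.TRACKING) {
      continue;
    }

    // Center pose: the physical center of the head, behind the nose.
    Pose centerPose = face.getCenterPose();

    // Face mesh: hundreds of vertices stored as (x, y, z) triples,
    // defined relative to the center pose.
    FloatBuffer vertices = face.getMeshVertices();
    int vertexCount = vertices.limit() / 3;

    // Region pose: here, the tip of the nose, as a world-space pose.
    Pose noseTip = face.getRegionPose(AugmentedFace.RegionType.NOSE_TIP);
  }
}
```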
These elements (the center pose, face mesh, and face region poses) comprise the augmented face mesh and are used by the `AugmentedFace` APIs as positioning points and regions to place assets in your app.
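In the Sceneform sample, this placement is handled by an `AugmentedFaceNode`. The following is a hedged sketch of the attachment step, assuming the fox model and face texture have already been loaded; the method and parameter names here are illustrative, not the sample's exact code:

```java
import com.google.ar.core.AugmentedFace;
import com.google.ar.sceneform.Scene;
import com.google.ar.sceneform.rendering.ModelRenderable;
import com.google.ar.sceneform.rendering.Texture;
import com.google.ar.sceneform.ux.AugmentedFaceNode;

// Attaches the fox assets to a newly detected face.
// faceRegionsRenderable is the ears/nose model and faceMeshTexture is
// the eye shadow/freckles texture, both loaded ahead of time.
void attachFoxFace(Scene scene, AugmentedFace face,
    ModelRenderable faceRegionsRenderable, Texture faceMeshTexture) {
  AugmentedFaceNode faceNode = new AugmentedFaceNode(face);
  faceNode.setParent(scene);

  // The model's bones follow the face region poses (forehead, nose tip).
  faceNode.setFaceRegionsRenderable(faceRegionsRenderable);

  // The texture is mapped onto the face mesh.
  faceNode.setFaceMeshTexture(faceMeshTexture);
}
```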
Next steps
Start using Augmented Faces in your own apps. To learn more, see:

- [Creating assets for Augmented Faces](/sceneform/develop/augmented-faces/creating-assets)
- [Augmented Faces developer guide for Sceneform](/sceneform/develop/augmented-faces/developer-guide)