- Sceneform SDK for Android was open sourced and archived (github.com/google-ar/sceneform-android-sdk) with version 1.16.0.
- This site (developers.google.com/sceneform) serves as the documentation archive for the previous version, Sceneform SDK for Android 1.15.0.
- Do not use version 1.17.0 of the Sceneform Maven artifacts.
- The 1.17.1 Maven artifacts can be used. Other than the version number, the 1.17.1 artifacts are identical to the 1.15.0 artifacts.
Augmented Faces
Augmented Faces allows your app to automatically identify different
regions of a detected face, and use those regions to overlay assets such as
textures and models in a way that properly matches the contours and regions of
an individual face.
How does Augmented Faces work?
------------------------------
The [**AugmentedFaces**](//github.com/google-ar/sceneform-android-sdk/tree/v1.15.0/samples/augmentedfaces) sample
app overlays the facial features of a fox onto a user's face using both a
3D model asset and a texture.
The 3D model consists of two fox ears and a fox nose. Each is a separate [bone](//en.wikipedia.org/wiki/Skeletal_animation)
that can be moved individually to follow the facial region it is attached to:
The texture consists of eye shadow, freckles, and other coloring:
When you run the sample app, it calls APIs to detect a face and overlays both the texture and the models onto the face.
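To illustrate, here is a minimal sketch of that setup, assuming Sceneform 1.15: the `FaceArFragment` mirrors the fragment the sample uses, while `FoxFaceAttacher`, `foxModel`, and `foxMeshTexture` are hypothetical names standing in for the loaded fox assets.

```java
import com.google.ar.core.AugmentedFace;
import com.google.ar.core.Config;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;
import com.google.ar.sceneform.Scene;
import com.google.ar.sceneform.rendering.ModelRenderable;
import com.google.ar.sceneform.rendering.Texture;
import com.google.ar.sceneform.ux.ArFragment;
import com.google.ar.sceneform.ux.AugmentedFaceNode;
import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// An ArFragment configured for Augmented Faces: it requests the front-facing
// (selfie) camera and enables the 3D face mesh.
public class FaceArFragment extends ArFragment {
  @Override
  protected Set<Session.Feature> getSessionFeatures() {
    return EnumSet.of(Session.Feature.FRONT_CAMERA);
  }

  @Override
  protected Config getSessionConfiguration(Session session) {
    Config config = new Config(session);
    config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
    return config;
  }
}

// Hypothetical helper: once the fox model and texture are loaded (via
// ModelRenderable.builder() and Texture.builder()), attach them to every
// face that ARCore starts tracking.
class FoxFaceAttacher {
  private final Map<AugmentedFace, AugmentedFaceNode> faceNodeMap = new HashMap<>();

  void attach(Scene scene, Session session,
              ModelRenderable foxModel, Texture foxMeshTexture) {
    scene.addOnUpdateListener(frameTime -> {
      for (AugmentedFace face : session.getAllTrackables(AugmentedFace.class)) {
        if (face.getTrackingState() == TrackingState.TRACKING
            && !faceNodeMap.containsKey(face)) {
          AugmentedFaceNode faceNode = new AugmentedFaceNode(face);
          faceNode.setParent(scene);
          faceNode.setFaceRegionsRenderable(foxModel);   // fox ears and nose
          faceNode.setFaceMeshTexture(foxMeshTexture);   // face-paint texture
          faceNodeMap.put(face, faceNode);
        }
      }
    });
  }
}
```

The `AugmentedFaceNode` then keeps the model's bones and the mesh texture aligned with the detected face as it moves.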
Identifying an augmented face mesh
----------------------------------
In order to properly overlay textures and 3D models on a detected face, ARCore
provides detected regions and an *augmented face mesh*. This mesh
is a virtual representation of the face, and consists of the vertices, facial
regions, and the center of the user's head. Note that the
[orientation](/sceneform/develop/augmented-faces/developer-guide#face_mesh_orientation)
of the mesh is different for Sceneform.
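If you work with the mesh directly through the ARCore API, rather than through Sceneform's `AugmentedFaceNode`, the `AugmentedFace` class exposes the center pose and the mesh buffers. A minimal sketch, where `FaceMeshReader` and its per-frame `onFrame` hook are hypothetical names for a callback in your own renderer:

```java
import com.google.ar.core.AugmentedFace;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.TrackingState;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;

// Hypothetical per-frame hook in a custom renderer.
class FaceMeshReader {
  void onFrame(Frame frame) {
    for (AugmentedFace face : frame.getUpdatedTrackables(AugmentedFace.class)) {
      if (face.getTrackingState() != TrackingState.TRACKING) {
        continue;
      }
      // The center pose sits behind the nose, at the physical center
      // of the user's head.
      Pose centerPose = face.getCenterPose();

      // Vertices are packed (x, y, z) triples defined relative to the center
      // pose; ARCore refills the buffer while the face is being tracked.
      FloatBuffer vertices = face.getMeshVertices();
      int vertexCount = vertices.limit() / 3;

      // Triangle indices (three per triangle) for drawing the mesh surface.
      ShortBuffer indices = face.getMeshTriangleIndices();
    }
  }
}
```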
When a user's face is detected by the camera, ARCore performs these steps to
generate the augmented face mesh, as well as center and region poses:
1. It identifies the *center pose* and a *face mesh*.
   - The center pose, located behind the nose, is the physical center point of the user's head (in other words, inside the skull).
   - The face mesh consists of hundreds of vertices that make up the face, and is defined relative to the center pose.
2. The `AugmentedFace` class uses the face mesh and center pose to identify *face region poses* on the user's face. These regions are:
   - Left forehead (`LEFT_FOREHEAD`)
   - Right forehead (`RIGHT_FOREHEAD`)
   - Tip of the nose (`NOSE_TIP`)
These elements -- the center pose, face mesh, and face region poses -- comprise
the *augmented face mesh* and are used by `AugmentedFace` APIs as positioning
points and regions to place the assets in your app.
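For example, here is a sketch of reading the three region poses from a tracked face. Note that the ARCore Java enum spells the forehead constants `FOREHEAD_LEFT` and `FOREHEAD_RIGHT`; `FaceRegionReader` and its method are hypothetical names.

```java
import com.google.ar.core.AugmentedFace;
import com.google.ar.core.AugmentedFace.RegionType;
import com.google.ar.core.Pose;

class FaceRegionReader {
  // Reads the three region poses from a tracked face. In the ARCore Java API
  // the forehead regions are spelled FOREHEAD_LEFT / FOREHEAD_RIGHT.
  void readRegionPoses(AugmentedFace face) {
    Pose leftForehead = face.getRegionPose(RegionType.FOREHEAD_LEFT);
    Pose rightForehead = face.getRegionPose(RegionType.FOREHEAD_RIGHT);
    Pose noseTip = face.getRegionPose(RegionType.NOSE_TIP);

    // ARCore updates these poses every frame, so an asset anchored to one
    // follows that facial region as the face moves. A renderer would
    // typically turn a pose into a model matrix:
    float[] modelMatrix = new float[16];
    noseTip.toMatrix(modelMatrix, 0);
  }
}
```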
Next steps
----------
Start using Augmented Faces in your own apps. To learn more, see:

- [Creating assets for Augmented Faces](/sceneform/develop/augmented-faces/creating-assets)
- [Augmented Faces developer guide for Sceneform](/sceneform/develop/augmented-faces/developer-guide)