
Use Depth in your Unity app

The Depth API helps a device understand the size and shape of the real objects in a scene. It uses the camera to create depth images, or depth maps, adding a layer of AR realism to your apps. You can use the information provided by a depth image to make virtual objects accurately appear in front of or behind real-world objects, enabling immersive and realistic user experiences.

Depth information is calculated from motion and combined with information from a hardware depth sensor, such as a time-of-flight (ToF) sensor, if available. A device does not need a ToF sensor to support the Depth API.

Prerequisites

Make sure that you understand fundamental AR concepts and how to configure an ARCore session before proceeding.

Restrict access to Depth-supported devices

If your app requires Depth API support, either because a core part of the AR experience relies on depth or because there is no graceful fallback for the parts of the app that use depth, you can restrict distribution of your app in the Google Play Store to devices that support the Depth API by adding the following line to your AndroidManifest.xml:

<uses-feature android:name="com.google.ar.core.depth" />

Enable Depth

In a new ARCore session, check whether the user's device supports Depth. Not all ARCore-compatible devices support the Depth API due to processing power constraints. To save resources, depth is disabled by default in ARCore. Enable depth mode to have your app use the Depth API.

// The ARCore session is created and managed by the ARCoreSession
// component in your scene.

// Check whether the user's device supports the Depth API.
if (Session.IsDepthModeSupported(DepthMode.Automatic))
{
  // If depth mode is available on the user's device, perform
  // the steps you want here.
}
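
Depth is then enabled through the session configuration. The following is a minimal sketch, assuming your scene contains an ARCoreSession component whose ARCoreSessionConfig exposes a DepthMode setting (available in SDK versions that support the Depth API); the DepthConfigurator name and field names are illustrative.

using GoogleARCore;
using UnityEngine;

// Sketch: enable depth in the session configuration when the device supports
// it, and leave it disabled otherwise.
public class DepthConfigurator : MonoBehaviour
{
    // The ARCoreSession component in the scene (assign in the Inspector).
    public ARCoreSession ArSession;

    private void Start()
    {
        ArSession.SessionConfig.DepthMode =
            Session.IsDepthModeSupported(DepthMode.Automatic)
                ? DepthMode.Automatic
                : DepthMode.Disabled;
    }
}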

Acquire depth images

Get the depth image for the current frame.

if (Frame.CameraImage.UpdateDepthTexture(ref DepthTexture) == DepthStatus.Success)
{
   // Use the texture in the material.
   m_Material.SetTexture(k_DepthTexturePropertyName, DepthTexture);
}
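
For context, here is a minimal sketch of a component that performs this update once per frame. The DepthTextureProvider class and the "_CurrentDepthTexture" property name are illustrative assumptions; use whichever property name your visualization or occlusion shader actually samples.

using GoogleARCore;
using UnityEngine;

// Minimal sketch wrapping the snippet above in a component. The field names
// mirror the snippet; the shader property name is an assumption.
public class DepthTextureProvider : MonoBehaviour
{
    // Texture that ARCore fills with the latest depth image.
    public Texture2D DepthTexture;

    // Material whose shader consumes the depth texture (assign in the Inspector).
    [SerializeField]
    private Material m_Material;

    // Hypothetical shader property name; match it to your shader.
    private const string k_DepthTexturePropertyName = "_CurrentDepthTexture";

    private void Update()
    {
        if (Frame.CameraImage.UpdateDepthTexture(ref DepthTexture) == DepthStatus.Success)
        {
            // Use the texture in the material.
            m_Material.SetTexture(k_DepthTexturePropertyName, DepthTexture);
        }
    }
}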

Understand depth values

Given a point A on the observed real-world geometry, a 2D point a representing the same point in the depth image, and the camera origin C, the value given by the Depth API at a is equal to the length of CA projected onto the principal axis. This can also be referred to as the z-coordinate of A relative to the camera origin C. When working with the Depth API, it is important to understand that the depth values are not the length of the ray CA itself, but its projection onto the principal axis.
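
To make the distinction concrete, the sketch below converts a depth value at pixel a back into a 3D point in the camera frame and into the length of the ray CA, using the standard pinhole camera model. The focal length and principal point parameters are assumptions; obtain the real intrinsics for the depth image resolution from your camera configuration.

using UnityEngine;

public static class DepthConversion
{
    // Converts the depth value at pixel (u, v) into a 3D point in the camera
    // frame using the pinhole model. focalLength and principalPoint are camera
    // intrinsics in pixels (assumed known for the depth image resolution).
    public static Vector3 DepthPixelToPoint(
        float u, float v, float depthMeters,
        Vector2 focalLength, Vector2 principalPoint)
    {
        float x = (u - principalPoint.x) * depthMeters / focalLength.x;
        float y = (v - principalPoint.y) * depthMeters / focalLength.y;

        // The depth value itself is the z-coordinate of A relative to C.
        return new Vector3(x, y, depthMeters);
    }

    // The length of the ray CA is the magnitude of that point. It is always
    // greater than or equal to the depth value reported by the Depth API.
    public static float RayLength(Vector3 pointInCameraFrame)
    {
        return pointInCameraFrame.magnitude;
    }
}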

Visualize depth data

We provide a shader that visualizes the depth image. To see this effect, attach DepthPreview.prefab to the main camera. DepthTexture.cs is a component in the depth preview object. CameraColorRampShader.shader will use the depth image to render a visualization, using a color ramp to represent different distances from the camera.
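
Conceptually, the color ramp maps each depth value to a color between a near and a far limit. The CPU-side sketch below illustrates the idea only; the actual CameraColorRampShader does this on the GPU, and the 8 meter range used here is an assumption.

using UnityEngine;

// Illustrative only: maps a depth value to a color the way a color-ramp
// visualization does, with red for near surfaces and blue for far ones.
public static class DepthColorRamp
{
    // Assumed visualization range; surfaces beyond this distance clamp to the far color.
    private const float MaxVisualizedDepthMeters = 8.0f;

    public static Color DepthToColor(float depthMeters)
    {
        float t = Mathf.Clamp01(depthMeters / MaxVisualizedDepthMeters);

        // Sweep hue from red (near) to blue (far).
        return Color.HSVToRGB(Mathf.Lerp(0.0f, 0.66f, t), 1.0f, 1.0f);
    }
}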

Occlude virtual objects

To occlude virtual objects, add DepthEffect as a component to the main camera. DepthEffect retrieves the latest depth image and attaches it, along with several properties, to the material used by this object. OcclusionImageEffect.shader then uses the depth image to render the object with realistic occlusion.
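
If you prefer to wire this up from script rather than in the Inspector, a minimal sketch follows; the OcclusionSetup name is illustrative, and the namespace of DepthEffect depends on where that script lives in your project.

using UnityEngine;

// Sketch: make sure the main camera has the DepthEffect component, adding it
// at runtime if it was not attached in the Inspector.
public class OcclusionSetup : MonoBehaviour
{
    private void Start()
    {
        if (Camera.main.GetComponent<DepthEffect>() == null)
        {
            Camera.main.gameObject.AddComponent<DepthEffect>();
        }
    }
}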

You can render occlusion using two-pass rendering or per-object, forward-pass rendering. The efficiency of each approach depends on the complexity of the scene and other app-specific considerations.

Per-object, forward-pass rendering

Per-object, forward-pass rendering determines the occlusion of each pixel of the object in its material shader. If the pixels are not visible, they are clipped, typically via alpha blending, thus simulating occlusion on the user’s device.
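
In either approach, the core per-pixel decision is a comparison between the depth of the virtual fragment and the real-world depth at the same pixel. The sketch below expresses that decision in C# for clarity; in practice it runs per pixel inside the material shader, and the bias value is an illustrative assumption.

// Illustrative CPU-side version of the per-pixel occlusion test that the
// occlusion shader performs. Both depths are in meters from the camera.
public static class OcclusionTest
{
    // Small tolerance (assumed value) so that virtual surfaces lying on real
    // geometry do not flicker at the occlusion boundary.
    private const float DepthBiasMeters = 0.05f;

    // Returns the alpha for a virtual pixel: 0 when the real world is closer
    // to the camera (the pixel is occluded), 1 when the virtual object is in front.
    public static float OcclusionAlpha(float virtualDepthMeters, float realDepthMeters)
    {
        return virtualDepthMeters > realDepthMeters + DepthBiasMeters ? 0.0f : 1.0f;
    }
}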

Two-pass rendering

With two-pass rendering, the first pass renders all of the virtual content into an intermediary buffer. The second pass blends the virtual scene onto the background based on the difference between the real-world depth and the virtual scene depth. This approach requires no additional object-specific shader work and generally produces more uniform-looking results than the forward-pass method.
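
A minimal sketch of that structure is below, assuming a secondary camera that renders only virtual content and a hypothetical blend material whose shader performs the depth comparison; neither is part of the SDK.

using UnityEngine;

// Sketch of a two-pass setup: virtual content is rendered into an intermediate
// buffer by a secondary camera, then composited over the camera background by
// a blend material that compares real and virtual depth.
[RequireComponent(typeof(Camera))]
public class TwoPassCompositor : MonoBehaviour
{
    public Camera VirtualContentCamera;   // Renders only the virtual objects.
    public Material BlendMaterial;        // Hypothetical material whose shader blends using the depth difference.

    private RenderTexture m_VirtualBuffer;

    private void OnEnable()
    {
        m_VirtualBuffer = new RenderTexture(Screen.width, Screen.height, 24);
        VirtualContentCamera.targetTexture = m_VirtualBuffer;  // First pass target.
    }

    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Second pass: blend the virtual buffer onto the background frame.
        BlendMaterial.SetTexture("_VirtualColorTex", m_VirtualBuffer);
        Graphics.Blit(source, destination, BlendMaterial);
    }
}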

What’s next