
Depth API developer guide for Unity

Learn how to use the Depth API in your own apps.


Make sure that you understand fundamental AR concepts and how to configure an ARCore session before proceeding.

Depth API-supported devices

If your app requires Depth API support, either because a core part of the AR experience relies on depth, or because there's no graceful fallback for the parts of the app that use depth, you may choose to restrict distribution of your app in the Google Play Store to devices that support the Depth API by adding the following line to your AndroidManifest.xml:

<uses-feature android:name="com.google.ar.core.depth" />

Check if Depth API is supported

In a new ARCore session, check whether a user's device supports the Depth API.

    // Create the ARCore session.
    var session = new Session();

    // Check whether the user's device supports the Depth API.
    if (Session.IsDepthModeSupported(DepthMode.Automatic))
    {
        // If depth mode is available on the user's device, perform
        // the steps you want here.
    }

Retrieve depth maps

Get the depth map for the current frame.

    if (Frame.CameraImage.UpdateDepthTexture(ref DepthTexture) == DepthStatus.Success)
    {
        // Use the texture in the material.
        m_Material.SetTexture(k_DepthTexturePropertyName, DepthTexture);
    }

Occlude virtual objects

To occlude virtual objects, attach DepthEffect as a component to the main camera. DepthEffect retrieves the latest depth map and attaches it, along with several related properties, to the material used by this object. OcclusionImageEffect.shader then uses the depth map to render the object with realistic occlusion.
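As an illustrative sketch (assuming the DepthEffect component from the ARCore SDK sample is present in your project), the component can also be attached at runtime:

```csharp
using UnityEngine;

public class EnableOcclusion : MonoBehaviour
{
    void Start()
    {
        // Attach DepthEffect (from the ARCore sample) to the main camera.
        // DepthEffect fetches the latest depth map each frame and feeds it,
        // along with related properties, to OcclusionImageEffect.shader.
        if (Camera.main.GetComponent<DepthEffect>() == null)
        {
            Camera.main.gameObject.AddComponent<DepthEffect>();
        }
    }
}
```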

Visualize the depth data

We provide a shader that visualizes the depth map. To see this effect, attach DepthPreview.prefab to the main camera. DepthTexture.cs is a component of the depth preview object. CameraColorRampShader.shader uses the depth map to render a visualization, mapping different distances from the camera to a color ramp.
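A minimal sketch of enabling the preview at runtime (the "DepthPreview" resource path is an assumption; adjust it to wherever the prefab lives in your project):

```csharp
using UnityEngine;

public class EnableDepthPreview : MonoBehaviour
{
    void Start()
    {
        // Load and instantiate the preview prefab as a child of the
        // main camera so its shader can visualize the depth map.
        var prefab = Resources.Load<GameObject>("DepthPreview");
        Instantiate(prefab, Camera.main.transform);
    }
}
```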

Conventional versus alternative implementations of occlusion rendering

The HelloAR sample app uses a two-pass rendering configuration to simulate occlusion. The first (render) pass renders all of the virtual content into an intermediary buffer. The second pass uses the depth map to combine the virtual content with the real-world camera image. The efficiency of each approach depends on the complexity of the scene and other app-specific considerations. The two-pass approach requires no additional shader work.

An alternative way to render occlusion is to implement per-object, forward-pass rendering. As each object is rendered, the app uses the depth map to determine whether certain pixels in the virtual content are visible. If the pixels are not visible, they will be clipped, simulating occlusion on the user’s device.
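A per-object forward-pass occlusion test typically reduces to a depth comparison followed by a clip in the fragment shader. The Cg/HLSL fragment below is a hedged sketch, not the sample's actual shader: `_CurrentDepthTexture`, `DecodeDepth`, `ComputeFragmentColor`, `i.screenUV`, and `i.eyeDepth` are illustrative names; see OcclusionImageEffect.shader for a working implementation.

```hlsl
// Depth map of the real world for the current frame (illustrative name).
sampler2D _CurrentDepthTexture;

fixed4 frag (v2f i) : SV_Target
{
    // Real-world depth at this pixel, in meters.
    // DecodeDepth is a placeholder: decoding depends on the texture format.
    float realDepth = DecodeDepth(tex2D(_CurrentDepthTexture, i.screenUV));

    // Eye-space depth of the virtual fragment being rendered.
    float virtualDepth = i.eyeDepth;

    // Discard fragments that lie behind real-world geometry.
    clip(realDepth - virtualDepth);

    // Otherwise, shade the fragment as usual (placeholder).
    return ComputeFragmentColor(i);
}
```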

We recommend tailoring the code in OcclusionImageEffect.shader and CameraColorRampShader.shader to your specific application.

Alternative uses of the Depth API

The HelloAR app uses the Depth API to create depth maps and simulate occlusion. Other uses for the Depth API include:

  • Collisions: virtual objects bouncing off of real-world objects
  • Distance measurement
  • Re-lighting a scene
  • Re-texturing existing objects: turning a floor into lava
  • Depth-of-field effects: blurring out the background or foreground
  • Environmental effects: fog, rain, and snow

For more detail and best practices for applying occlusion in shaders, check out the HelloAR sample app.

Understanding depth values

Given a point A on the observed real-world geometry and a 2D point a representing the same point in the depth image, the value given by the Depth API at a is equal to the length of CA projected onto the principal axis, where C is the camera origin. This value is also known as the z-coordinate of A relative to C. When working with the Depth API, it is important to understand that the depth values are not the length of the ray CA itself, but its projection onto the principal axis.
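For example, converting a Depth API value back into the Euclidean distance |CA| requires the camera intrinsics. The helper below is an illustrative sketch; fx, fy (focal lengths in pixels) and cx, cy (principal point) must come from your camera's intrinsics, and the function name is an assumption.

```csharp
using UnityEngine;

public static class DepthUtils
{
    // Convert the z-depth at pixel (u, v) into the Euclidean distance
    // from the camera origin C to the real-world point A.
    public static float DistanceFromDepth(
        float depth, float u, float v,
        float fx, float fy, float cx, float cy)
    {
        float x = (u - cx) / fx;  // ray direction, x component
        float y = (v - cy) / fy;  // ray direction, y component
        // A = depth * (x, y, 1), so |CA| = depth * sqrt(x^2 + y^2 + 1).
        return depth * Mathf.Sqrt(x * x + y * y + 1.0f);
    }
}
```

At the principal point (u = cx, v = cy) the ray distance equals the depth value; the difference grows toward the edges of the image.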