As an AR app developer, you want to seamlessly blend the virtual with the real for your users. When a user places a virtual object in their scene, they want it to look like it belongs in the real world. If you're building an app for users to shop for furniture, you want them to be confident that the armchair they're about to buy will fit into their space.
The Depth API helps a device's camera understand the size and shape of the real objects in a scene. It creates depth images, or depth maps, adding a layer of realism to your apps. You can use the information provided by a depth image to enable immersive and realistic user experiences.
Use cases for developing with the Depth API
-------------------------------------------
The Depth API can power object occlusion, improved immersion, and novel interactions that enhance the realism of AR experiences. The following are some ways you can use it in your own projects. For examples of Depth in action, explore the sample scenes in the [ARCore Depth Lab](https://play.google.com/store/apps/details?id=com.google.ar.unity.arcore_depth_lab), which demonstrates different ways to access depth data. This Unity app is open source on [GitHub](https://github.com/googlesamples/arcore-depth-lab).
### Enable occlusion
Occlusion, or accurately rendering a virtual object behind real-world objects, is paramount to an immersive AR experience. Consider a virtual Andy that a user may want to place in a scene containing a trunk beside a door. Rendered without occlusion, the Andy will unrealistically overlap with the edge of the trunk. If you use the depth of a scene and understand how far away the virtual Andy is relative to surroundings like the wooden trunk, you can accurately render the Andy with occlusion, making it appear much more realistic in its surroundings.

### Transform a scene

Transport your user into a new, immersive world by rendering virtual snowflakes that settle on the arms and pillows of their couches, or casting their living room in a misty fog. You can use Depth to create a scene where virtual lights interact with, hide behind, and relight real objects.

### Distance and depth of field

Need to show that something is far away? You can use the distance measurement and add depth-of-field effects, such as blurring out the background or foreground of a scene, with the Depth API.

### Enable user interactions with AR objects

Allow users to "touch" the world through your app by enabling virtual content to interact with the real world through collision and physics. Have virtual objects go over real-world obstacles, or have virtual paintballs hit and splatter onto a real-world tree. When you combine depth-based collision with game physics, you can make an experience come to life.

### Improve hit-tests

Depth can be used to improve hit-test results. Plane hit-tests only work on planar surfaces with texture, whereas depth hit-tests are more detailed and work even on non-planar and low-texture areas. This is because depth hit-tests use depth information from the scene to determine the correct depth and orientation of a point.

In the following example, the green Andys represent standard plane hit-tests and the red Andys represent depth hit-tests.
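As a rough illustration of the idea on Android, the Kotlin sketch below prefers whichever hit comes back first, accepting depth hit-test results (`DepthPoint` trackables) alongside plane hits so placement also works on non-planar, low-texture surfaces. The function name and parameters are illustrative; it assumes an ARCore `Frame` from a session with depth enabled.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.DepthPoint
import com.google.ar.core.Frame
import com.google.ar.core.HitResult
import com.google.ar.core.Plane

// Illustrative sketch: place an anchor at the first usable hit for a tap at
// (tapX, tapY) in screen pixels, accepting depth hit-tests (DepthPoint) so
// placement also works on surfaces that plane detection misses.
fun placeAnchorAtTap(frame: Frame, tapX: Float, tapY: Float): Anchor? {
    val hits: List<HitResult> = frame.hitTest(tapX, tapY)
    val hit = hits.firstOrNull { result ->
        when (val trackable = result.trackable) {
            is DepthPoint -> true                                 // Hit derived from the depth map.
            is Plane -> trackable.isPoseInPolygon(result.hitPose) // Hit on a detected plane.
            else -> false
        }
    }
    return hit?.createAnchor()
}
```

Hit results are returned sorted by distance from the camera, so taking the first acceptable one keeps the anchor on the nearest surface under the user's finger.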
Device compatibility
--------------------

The Depth API is only supported on devices with the processing power to support depth, and it must be enabled manually in ARCore, as described in [Enable Depth](/ar/develop/unity-arf/depth/developer-guide#enable-depth).

Some devices may also provide a hardware depth sensor, such as a time-of-flight (ToF) sensor. Refer to the [ARCore supported devices](/ar/devices) page for an up-to-date list of devices that support the Depth API and a list of devices that have a supported hardware depth sensor, such as a ToF sensor.
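On Android (Kotlin/Java), the support check and opt-in amount to a few lines. The sketch below is a minimal example with a hypothetical helper name, assuming an already-created ARCore `Session`.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Session

// Illustrative sketch: enable the Depth API only when the device supports it,
// so the app can degrade gracefully on devices without depth support.
fun enableDepthIfSupported(session: Session) {
    val config: Config = session.config
    config.depthMode = if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        Config.DepthMode.AUTOMATIC
    } else {
        Config.DepthMode.DISABLED // The default; hide depth-dependent features in this case.
    }
    session.configure(config)
}
```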
Depth images
------------
The Depth API uses a depth-from-motion algorithm to create depth images, which give a 3D view of the world. Each pixel in a depth image is associated with a measurement of how far the scene is from the camera. The algorithm takes multiple device images from different angles and compares them to estimate the distance to every pixel as the user moves their phone. It selectively uses machine learning to increase depth processing, even with minimal motion from the user. It also takes advantage of any additional hardware the user's device might have. If the device has a dedicated depth sensor, such as ToF, the algorithm automatically merges data from all available sources. This enhances the existing depth image and enables depth even when the camera is not moving. It also provides better depth on surfaces with few or no features, such as white walls, or in dynamic scenes with moving people or objects.
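To make the per-pixel measurement concrete, here is a hedged Kotlin sketch of reading one distance value from the depth image acquired for the current ARCore frame (acquisition is covered in more detail below). The function name is illustrative, and it assumes depth has been enabled on the session.

```kotlin
import android.media.Image
import com.google.ar.core.Frame
import com.google.ar.core.exceptions.NotYetAvailableException
import java.nio.ByteOrder

// Illustrative sketch: return the distance in millimeters stored at (x, y) in
// the current frame's depth image, or null if depth data is not available yet.
// (x, y) are coordinates in the depth image, which is typically lower
// resolution than the camera image.
fun depthMillimetersAt(frame: Frame, x: Int, y: Int): Int? {
    val depthImage: Image = try {
        frame.acquireDepthImage16Bits()
    } catch (e: NotYetAvailableException) {
        return null // Depth becomes available only after the user moves the device.
    }
    try {
        val plane = depthImage.planes[0]
        val byteIndex = x * plane.pixelStride + y * plane.rowStride
        val buffer = plane.buffer.order(ByteOrder.nativeOrder())
        // Each 16-bit sample holds a distance from the camera plane in millimeters.
        return buffer.getShort(byteIndex).toInt() and 0xFFFF
    } finally {
        depthImage.close() // Release the image so ARCore can reuse the buffer.
    }
}
```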
The following images show a camera image of a hallway with a bicycle on the wall, and a visualization of the depth image that is created from the camera images. Areas in red are closer to the camera, and areas in blue are farther away.

### Depth from motion

Depth data becomes available when the user moves their device. The algorithm can get robust, accurate depth estimates from 0 to 65 meters away. The most accurate results come when the device is half a meter to about five meters away from the real-world scene. Experiences that encourage the user to move their device more will get better and better results.

### Acquire depth images

With the Depth API, you can retrieve depth images that match every camera frame. An acquired depth image has the same timestamp and field-of-view intrinsics as the camera. Valid depth data are only available after the user has started moving their device around, since depth is acquired from motion. Surfaces with few or no features, such as white walls, will be associated with imprecise depth.

What's next
-----------

- Check out the [ARCore Depth Lab](https://github.com/googlesamples/arcore-depth-lab), which demonstrates different ways to access depth data.