# Depth adds realism

**Platform-specific guides**

### Android (Kotlin/Java)

- [Quickstart](/ar/develop/java/depth/quickstart)
- [Developer guide](/ar/develop/java/depth/developer-guide)
- [Raw depth](/ar/develop/java/depth/raw-depth)
- [Geospatial depth](/ar/develop/java/depth/geospatial-depth)

### Android NDK (C)

- [Quickstart](/ar/develop/c/depth/quickstart)
- [Developer guide](/ar/develop/c/depth/developer-guide)
- [Raw depth](/ar/develop/c/depth/raw-depth)
- [Geospatial depth](/ar/develop/c/depth/geospatial-depth)

### Unity (AR Foundation)

- [Developer guide](/ar/develop/unity-arf/depth/developer-guide)
- [Raw depth](/ar/develop/unity-arf/depth/raw-depth)
- [Geospatial depth](/ar/develop/unity-arf/depth/geospatial-depth)

### Unreal Engine

- [ARCore SDK for Unreal Engine (official documentation)](https://docs.unrealengine.com/5.0/en-US/developing-for-arcore-in-unreal-engine/)

As an AR app developer, you want to seamlessly blend the virtual with the real for your users. When a user places a virtual object in their scene, they want it to look like it belongs in the real world. If you're building an app for users to shop for furniture, you want them to be confident that the armchair they're about to buy will fit into their space.

The Depth API helps a device's camera to understand the size and shape of the real objects in a scene. It creates depth images, or depth maps, thereby adding a layer of realism into your apps. You can use the information provided by a depth image to enable immersive and realistic user experiences.

Use cases for developing with the Depth API
-------------------------------------------

The Depth API can power object occlusion, improved immersion, and novel interactions that enhance the realism of AR experiences. The following are some ways you can use it in your own projects. For examples of Depth in action, explore the sample scenes in the [ARCore Depth Lab](https://play.google.com/store/apps/details?id=com.google.ar.unity.arcore_depth_lab), which demonstrates different ways to access depth data. This Unity app is open-source on [GitHub](https://github.com/googlesamples/arcore-depth-lab).

### Enable occlusion

Occlusion, or accurately rendering a virtual object behind real-world objects, is paramount to an immersive AR experience. Consider a virtual Andy that a user may want to place in a scene containing a trunk beside a door. Rendered without occlusion, the Andy will unrealistically overlap with the edge of the trunk. If you use the depth of a scene and understand how far away the virtual Andy is relative to surroundings like the wooden trunk, you can accurately render the Andy with occlusion, making it appear much more realistic in its surroundings.
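Occlusion, like every depth-based feature below, first requires depth to be enabled on the session. The platform developer guides linked above cover the details; as a rough illustration using the ARCore Java API, a minimal sketch might look like the following (the `DepthSetup` helper class and method name are illustrative, not part of the SDK):

```java
import com.google.ar.core.Config;
import com.google.ar.core.Session;

/** Illustrative helper: turn on depth, the prerequisite for occlusion. */
public final class DepthSetup {

  /** Enables automatic depth if the device supports it; returns whether it did. */
  public static boolean enableDepthIfSupported(Session session) {
    Config config = session.getConfig();
    // Not every ARCore device supports depth, so check before enabling it.
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
      config.setDepthMode(Config.DepthMode.AUTOMATIC);
      session.configure(config);
      return true;
    }
    return false; // The app still runs, just without depth-based occlusion.
  }
}
```

With depth enabled, the renderer can compare the depth of each virtual fragment against the depth image, hiding the parts of Andy that fall behind the trunk; the per-platform developer guides cover the rendering side.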
### Transform a scene

Transport your user into a new, immersive world by rendering virtual snowflakes that settle on the arms and pillows of their couches, or casting their living room in a misty fog. You can use Depth to create a scene where virtual lights interact with, hide behind, and relight real objects.

### Distance and depth of field

Need to show that something is far away? With the Depth API, you can use distance measurements to add depth-of-field effects, such as blurring the background or foreground of a scene.

### Enable user interactions with AR objects

Allow users to "touch" the world through your app by enabling virtual content to interact with the real world through collision and physics. Have virtual objects go over real-world obstacles, or have virtual paintballs hit and splatter onto a real-world tree. When you combine depth-based collision with game physics, you can make an experience come to life.

### Improve hit-tests

Depth can be used to improve hit-test results. Plane hit-tests only work on planar surfaces with texture, whereas depth hit-tests are more detailed and work even on non-planar and low-texture areas. This is because depth hit-tests use depth information from the scene to determine the correct depth and orientation of a point.

In the following example, the green Andys represent standard plane hit-tests and the red Andys represent depth hit-tests.
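To make the distinction concrete, here is a hedged Java sketch of a tap handler that accepts either kind of hit when placing an anchor. The `DepthHitTest` helper is hypothetical, while `Frame.hitTest()`, `Plane`, and `DepthPoint` are standard ARCore types:

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.DepthPoint;
import com.google.ar.core.Frame;
import com.google.ar.core.HitResult;
import com.google.ar.core.Plane;
import com.google.ar.core.Trackable;

/** Illustrative helper: anchor content at the first usable hit under a screen tap. */
public final class DepthHitTest {

  public static Anchor anchorAtTap(Frame frame, float tapX, float tapY) {
    for (HitResult hit : frame.hitTest(tapX, tapY)) {
      Trackable trackable = hit.getTrackable();
      // With depth enabled, hit results can include DepthPoints, which also
      // land on non-planar and low-texture surfaces.
      boolean depthHit = trackable instanceof DepthPoint;
      boolean planeHit =
          trackable instanceof Plane
              && ((Plane) trackable).isPoseInPolygon(hit.getHitPose());
      if (depthHit || planeHit) {
        return hit.createAnchor();
      }
    }
    return null; // No usable surface under the tap yet.
  }
}
```

Because a `DepthPoint` is backed by the depth image rather than a detected plane, it can give a placement on surfaces such as a bare wall where no plane has been found.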
Device compatibility
--------------------

The Depth API is only supported on devices with the processing power to support depth, and it must be enabled manually in ARCore, as described in [Enable Depth](/ar/develop/unity-arf/depth/developer-guide#enable-depth).

Some devices may also provide a hardware depth sensor, such as a time-of-flight (ToF) sensor. Refer to the [ARCore supported devices](/ar/devices) page for an up-to-date list of devices that support the Depth API and a list of devices that have a supported hardware depth sensor, such as a ToF sensor.

Depth images
------------

The Depth API uses a depth-from-motion algorithm to create depth images, which give a 3D view of the world. Each pixel in a depth image is associated with a measurement of how far the scene is from the camera. The algorithm takes multiple device images from different angles and compares them to estimate the distance to every pixel as the user moves their phone. It selectively uses machine learning to improve depth estimation, even with minimal motion from the user. It also takes advantage of any additional hardware the user's device might have. If the device has a dedicated depth sensor, such as a ToF sensor, the algorithm automatically merges data from all available sources. This enhances the existing depth image and enables depth even when the camera is not moving. It also provides better depth on surfaces with few or no features, such as white walls, and in dynamic scenes with moving people or objects.

The following images show a camera image of a hallway with a bicycle on the wall, and a visualization of the depth image created from the camera images. Areas in red are closer to the camera, and areas in blue are farther away.

### Depth from motion

Depth data becomes available when the user moves their device. The algorithm can get robust, accurate depth estimates from 0 to 65 meters away. The most accurate results come when the device is half a meter to about five meters away from the real-world scene. Experiences that encourage the user to move their device more will get better results.

### Acquire depth images

With the Depth API, you can retrieve depth images that match every camera frame. An acquired depth image has the same timestamp and field-of-view intrinsics as the camera. Valid depth data is only available after the user has started moving their device around, since depth is acquired from motion. Surfaces with few or no features, such as white walls, will be associated with imprecise depth. A minimal sketch of reading per-pixel depth from an acquired depth image appears at the end of this page.

What's next
-----------

- Check out the [ARCore Depth Lab](https://github.com/googlesamples/arcore-depth-lab), which demonstrates different ways to access depth data.
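Finally, the sketch referenced in "Acquire depth images" above: a minimal Java example, assuming the `Frame.acquireDepthImage16Bits()` path from the Android developer guide, that reads the depth in millimeters at a given depth-image pixel. The `DepthSampler` class is illustrative, and a real app would also map camera-image coordinates to the (typically lower-resolution) depth image.

```java
import android.media.Image;
import com.google.ar.core.Frame;
import com.google.ar.core.exceptions.NotYetAvailableException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

/** Illustrative helper: sample the depth, in millimeters, at a depth-image pixel. */
public final class DepthSampler {

  public static int depthMillimetersAt(Frame frame, int x, int y)
      throws NotYetAvailableException {
    // Depth only becomes available once the user has moved the device around.
    try (Image depthImage = frame.acquireDepthImage16Bits()) {
      // The depth image has a single 16-bit plane; each value is a depth in
      // millimeters. (x, y) are coordinates within the depth image itself.
      Image.Plane plane = depthImage.getPlanes()[0];
      ByteBuffer buffer = plane.getBuffer().order(ByteOrder.nativeOrder());
      int byteIndex = y * plane.getRowStride() + x * plane.getPixelStride();
      return Short.toUnsignedInt(buffer.getShort(byteIndex));
    }
  }
}
```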