You can use the camera feed that ARCore captures in a machine learning pipeline with ML Kit and the Google Cloud Vision API to identify real-world objects and create an intelligent augmented reality experience.
The image at left is taken from the ARCore ML Kit sample, written in Kotlin for Android. This sample app uses a machine learning model to classify objects in the camera's view and attaches a label to the object in the virtual scene.
ML Kit provides SDKs for both Android and iOS, and the Google Cloud Vision API offers both REST and RPC interfaces, so you can achieve the same results as the ARCore ML Kit sample in your own iOS app in Objective-C or Swift.
See Use ARCore as input for Machine Learning models for an overview of the patterns you need to implement, then apply them in your own app.
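The core pattern is to acquire the current camera image from an ARCore frame, wrap it for the ML model, and run inference. The following Kotlin sketch illustrates this with ML Kit's object detection API; it is a simplified illustration rather than the sample's actual code, and the `onUpdate` callback and rotation value of 0 are assumptions for brevity.

```kotlin
import com.google.ar.core.Frame
import com.google.ar.core.exceptions.NotYetAvailableException
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.defaults.ObjectDetectorOptions

// Configure an ML Kit object detector that classifies single images.
val detector = ObjectDetection.getClient(
    ObjectDetectorOptions.Builder()
        .setDetectorMode(ObjectDetectorOptions.SINGLE_IMAGE_MODE)
        .enableClassification()
        .build()
)

// Hypothetical per-frame callback; call this with each ARCore frame.
fun onUpdate(frame: Frame) {
    val cameraImage = try {
        // Acquire the CPU image for this frame; it must be closed when done.
        frame.acquireCameraImage()
    } catch (e: NotYetAvailableException) {
        return // The camera image is not yet available for this frame.
    }
    // Wrap the media image for ML Kit; pass the real display rotation in your app.
    val inputImage = InputImage.fromMediaImage(cameraImage, /* rotationDegrees= */ 0)
    detector.process(inputImage)
        .addOnSuccessListener { detectedObjects ->
            // Use the bounding boxes and labels to place anchors and
            // attach labels to objects in the virtual scene.
        }
        .addOnCompleteListener { cameraImage.close() }
}
```

Because `acquireCameraImage()` holds a limited pool of images, closing the image in `addOnCompleteListener` ensures it is released whether detection succeeds or fails.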