Machine Learning in Earth Engine

Machine Learning (ML) in Earth Engine is supported with:

  • EE API methods in the ee.Classifier, ee.Clusterer, or ee.Reducer packages for training and inference within Earth Engine.
  • Export and import functions for TFRecord files to facilitate TensorFlow model development. Inference using data in Earth Engine and a trained model hosted on Google's AI Platform is supported with the ee.Model package.

EE API methods

Training and inference using ee.Classifier or ee.Clusterer is generally effective up to a request size of approximately 100 megabytes. As a very rough guideline, assuming 32-bit (i.e. float) precision, this can accommodate training datasets that satisfy (where n is the number of examples and b is the number of bands):

nb ≤ (100 * 2^20) / 4

This is only an approximate guideline because of additional overhead around the request, but note that for b = 100 (i.e. 100 properties used for prediction), the training set can contain roughly n ≅ 200,000 examples. Since Earth Engine processes 256x256 image tiles, inference requests on imagery must have b < 400 (again assuming 32-bit precision of the imagery). Examples of machine learning using the Earth Engine API can be found on the Supervised Classification page or the Unsupervised Classification page. Regression is generally performed with an ee.Reducer as described on this page, but see also ee.Reducer.ridgeRegression.
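
As a concrete illustration of the in-Engine workflow, the sketch below samples a labeled FeatureCollection, trains an ee.Classifier, and classifies an image with the Python client. The training-points asset ID, the 'landcover' label property, and the chosen bands are illustrative placeholders, not values prescribed by the API.

```python
import ee

ee.Initialize()

# Predictor bands and a composite image to classify.
bands = ['B2', 'B3', 'B4', 'B8']
image = (ee.ImageCollection('COPERNICUS/S2_SR')
         .filterDate('2021-07-01', '2021-07-31')
         .median()
         .select(bands))

# Placeholder asset: a point collection with a 'landcover' class property.
training_points = ee.FeatureCollection('users/example/training_points')

# Sample predictor values at the labeled points. Keep n * b comfortably
# under the ~100 MB request guideline described above.
training = image.sampleRegions(
    collection=training_points,
    properties=['landcover'],
    scale=10,
)

# Train a random forest and classify the image entirely within Earth Engine.
classifier = ee.Classifier.smileRandomForest(numberOfTrees=50).train(
    features=training,
    classProperty='landcover',
    inputProperties=bands,
)
classified = image.classify(classifier)
```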

TensorFlow

If you require more complex models, larger training datasets, more input properties, or longer training times, then TensorFlow is a better option. TensorFlow models are developed, trained, and deployed outside Earth Engine. For easier interoperability, the Earth Engine API provides methods to import and export data in TFRecord format, which makes it straightforward to generate training and evaluation data in Earth Engine and export it in a form that a TensorFlow model can readily consume. To perform prediction with a trained TensorFlow model, you can either export imagery in TFRecord format and then import the predictions (also in TFRecord) to Earth Engine, or you can deploy your trained model to Google AI Platform and perform inference directly in Earth Engine using ee.Model.fromAiPlatformPredictor.
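
The sketch below outlines both paths with the Python client: exporting imagery as TFRecord patches, and running inference against a model hosted on AI Platform. The export region, Cloud project, model name, version, tile sizes, and output band name are hypothetical placeholders; an actual deployment would supply its own values.

```python
import ee

ee.Initialize()

bands = ['B2', 'B3', 'B4', 'B8']
image = (ee.ImageCollection('COPERNICUS/S2_SR')
         .filterDate('2021-07-01', '2021-07-31')
         .median()
         .select(bands))
region = ee.Geometry.Rectangle([-122.6, 37.6, -122.3, 37.9])  # placeholder region

# Path 1: export imagery as TFRecord patches for an external TensorFlow
# training or prediction pipeline.
task = ee.batch.Export.image.toDrive(
    image=image,
    description='tfrecord_export_example',
    region=region,
    scale=10,
    fileFormat='TFRecord',
    formatOptions={'patchDimensions': [256, 256], 'compressed': True},
)
task.start()

# Path 2: run inference in Earth Engine against a trained model hosted on
# AI Platform (project, model, and version names are hypothetical).
model = ee.Model.fromAiPlatformPredictor(
    projectName='my-project',
    modelName='my_hosted_model',
    version='v1',
    inputTileSize=[144, 144],
    inputOverlapSize=[8, 8],
    proj=ee.Projection('EPSG:4326').atScale(10),
    fixInputProj=True,
    outputBands={'prediction': {'type': ee.PixelType.float()}},
)
predictions = model.predictImage(image.toArray())
```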

See the TensorFlow page for details and example workflows.