mediapipe_model_maker.face_stylizer.hyperparameters.hp.BaseHParams

Hyperparameters used for training models.

A common set of hyperparameters shared by the training jobs of all Model Maker tasks; a configuration sketch follows the attribute list below.

Attributes

learning_rate: The learning rate to use for gradient descent training.
batch_size: Batch size for training.
epochs: Number of training iterations over the dataset.
steps_per_epoch: An optional integer indicating the number of training steps per epoch. If not set, the training pipeline calculates the default steps per epoch as the training dataset size divided by the batch size.
class_weights: An optional mapping of indices to weights for weighting the loss function during training.
shuffle: True if the dataset is shuffled before training.
repeat: True if the training dataset is repeated infinitely to support training without checking the dataset size.
export_dir: The location of the model checkpoint files.
distribution_strategy: A string specifying which Distribution Strategy to use. Accepted values are 'off', 'one_device', 'mirrored', 'parameter_server', 'multi_worker_mirrored', and 'tpu' (case insensitive). 'off' means not to use a Distribution Strategy; 'tpu' means to use TPUStrategy with tpu_address. See the tf.distribute.Strategy documentation for more details: https://www.tensorflow.org/api_docs/python/tf/distribute/Strategy.
num_gpus: How many GPUs to use at each worker with the DistributionStrategies API. The default is 0.
tpu: The TPU resource to be used for training. This should be either the name used when creating the Cloud TPU, a grpc://ip.address.of.tpu:8470 URL, or an empty string if using a local TPU.
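
A minimal sketch of constructing these hyperparameters directly, using the import path from this page's title; the field values below are illustrative, not recommended defaults:

from mediapipe_model_maker.face_stylizer.hyperparameters import hp

# learning_rate, batch_size, and epochs have no listed defaults,
# so they are passed explicitly; all remaining fields are optional.
params = hp.BaseHParams(
    learning_rate=1e-3,          # illustrative value
    batch_size=8,                # illustrative value
    epochs=10,                   # illustrative value
    shuffle=True,                # shuffle the dataset before training
    export_dir='/tmp/stylizer',  # where checkpoint files are written
)

In practice these settings are usually supplied through a task-specific HParams subclass rather than BaseHParams itself.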

Methods

get_strategy

Creates and returns the tf.distribute.Strategy selected by the distribution_strategy setting, configured with num_gpus and tpu where applicable.
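
A brief usage sketch (standard tf.distribute usage, assuming get_strategy takes no arguments):

from mediapipe_model_maker.face_stylizer.hyperparameters import hp

params = hp.BaseHParams(
    learning_rate=1e-3, batch_size=8, epochs=10,
    distribution_strategy='mirrored', num_gpus=2,  # requires two visible GPUs
)
strategy = params.get_strategy()
with strategy.scope():
    pass  # build and compile the model inside the strategy scope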

__eq__

Return self==value.
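
BaseHParams behaves as a Python dataclass (note the generated __eq__ and the class variables below), so equality compares instances field by field. A brief illustration with arbitrary values, assuming the defaults (including export_dir) are shared class-level values as the Class Variables table suggests:

from mediapipe_model_maker.face_stylizer.hyperparameters import hp

a = hp.BaseHParams(learning_rate=1e-3, batch_size=8, epochs=10)
b = hp.BaseHParams(learning_rate=1e-3, batch_size=8, epochs=10)
assert a == b   # identical field values compare equal
assert a != hp.BaseHParams(learning_rate=1e-2, batch_size=8, epochs=10)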

Class Variables

class_weights: None
distribution_strategy: 'off'
export_dir: '/tmpfs/tmp/tmpnt_h4p9w' (a temporary directory generated when these docs were built)
num_gpus: 0
repeat: False
shuffle: False
steps_per_epoch: None
tpu: ''
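
Because these defaults are ordinary field values, individual settings can be overridden per instance. A sketch using the standard library's dataclasses.replace, assuming BaseHParams is a regular dataclass:

import dataclasses

from mediapipe_model_maker.face_stylizer.hyperparameters import hp

base = hp.BaseHParams(learning_rate=1e-3, batch_size=8, epochs=10)
# Copy the hyperparameters, changing only the epoch count and export directory.
tuned = dataclasses.replace(base, epochs=20, export_dir='/tmp/stylizer_run')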