mp.calculators.tensor.inference_calculator_pb2.InferenceCalculatorOptions.Delegate.Gpu

A ProtocolMessage holding the GPU delegate options for InferenceCalculator.

Attributes

allow_precision_loss       bool allow_precision_loss
api                         Api api
cache_writing_behavior      CacheWritingBehavior cache_writing_behavior
cached_kernel_path          string cached_kernel_path
model_token                 string model_token
serialized_model_dir        string serialized_model_dir
usage                       InferenceUsage usage
use_advanced_gpu_api        bool use_advanced_gpu_api
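
The message can be built directly in Python once the generated module is imported. A minimal sketch, assuming the installed mediapipe package exposes the generated module at the path shown in the title; the path, directory, and token values are hypothetical placeholders:

    # Minimal sketch: build the Gpu delegate options and set the scalar/string
    # fields listed above. Assumes mediapipe's generated protobuf module is
    # importable from the installed package.
    from mediapipe.calculators.tensor import inference_calculator_pb2

    GpuOptions = inference_calculator_pb2.InferenceCalculatorOptions.Delegate.Gpu

    gpu = GpuOptions(
        allow_precision_loss=True,
        use_advanced_gpu_api=True,
        cached_kernel_path="/tmp/gpu_kernels",    # hypothetical cache directory
        serialized_model_dir="/tmp/gpu_models",   # hypothetical serialization dir
        model_token="example_model",              # hypothetical token
    )
    print(gpu)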

Class Variables

Api                       Instance of google.protobuf.internal.enum_type_wrapper.EnumTypeWrapper
    ANY                   0
    OPENGL                1
    OPENCL                2

CacheWritingBehavior      Instance of google.protobuf.internal.enum_type_wrapper.EnumTypeWrapper
    NO_WRITE              0
    TRY_WRITE             1
    WRITE_OR_ERROR        2

InferenceUsage            Instance of google.protobuf.internal.enum_type_wrapper.EnumTypeWrapper
    UNSPECIFIED           0
    FAST_SINGLE_ANSWER    1
    SUSTAINED_SPEED       2
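
The enum-typed fields are assigned using the class constants above, and the EnumTypeWrapper attributes provide name/value lookups, as with any generated protobuf enum. A minimal sketch under the same import assumption as the previous example:

    from mediapipe.calculators.tensor import inference_calculator_pb2

    Gpu = inference_calculator_pb2.InferenceCalculatorOptions.Delegate.Gpu

    gpu = Gpu()
    gpu.api = Gpu.OPENCL                        # Api: ANY=0, OPENGL=1, OPENCL=2
    gpu.usage = Gpu.FAST_SINGLE_ANSWER          # InferenceUsage constant
    gpu.cache_writing_behavior = Gpu.TRY_WRITE  # CacheWritingBehavior constant

    # The EnumTypeWrapper class attributes map between names and values.
    print(Gpu.Api.Name(gpu.api))                        # "OPENCL"
    print(Gpu.InferenceUsage.Value("SUSTAINED_SPEED"))  # 2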