Best Practices and Limitations

Consider these guidelines when using BatchJobService.

Improve throughput

  • Prefer fewer, larger jobs over many smaller jobs.

  • Order uploaded operations by operation type. For example, if your job contains operations to add campaigns, ad groups, and ad group criteria, order the operations in your upload so that all of the campaign operations are first, followed by all of the ad group operations, and finally all ad group criterion operations.

  • Within operations of the same type, it can improve performance to group them by parent resource. For example, if you have a series of AdGroupCriterionOperation objects, it can be more efficient to group operations by ad group, rather than intermixing operations that affect ad group criteria in different ad groups.
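The two ordering tips above can be sketched as a single sort. This is a minimal Python illustration; the operation dictionaries, the "type" and "parent" keys, and the TYPE_ORDER mapping are stand-ins for the real operation objects, not the actual API types.

```python
# Rank operation types in the recommended upload order.
TYPE_ORDER = {
    "campaign_operation": 0,
    "ad_group_operation": 1,
    "ad_group_criterion_operation": 2,
}

def order_operations(operations):
    # Sort by operation type first, then by parent resource, so that
    # same-type operations touching the same parent end up adjacent.
    return sorted(
        operations,
        key=lambda op: (TYPE_ORDER[op["type"]], op.get("parent", "")),
    )

mixed = [
    {"type": "ad_group_criterion_operation", "parent": "adGroups/2"},
    {"type": "campaign_operation", "parent": ""},
    {"type": "ad_group_criterion_operation", "parent": "adGroups/1"},
    {"type": "ad_group_operation", "parent": "campaigns/1"},
]
ordered = order_operations(mixed)
```

Because the sort key is (type rank, parent resource), all campaign operations come first, then ad group operations, then ad group criterion operations grouped by their ad group.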

Avoid concurrency issues

  • When submitting multiple concurrent jobs for the same account, try to reduce the likelihood of jobs operating on the same objects at the same time, while maintaining large job sizes. Many unfinished jobs (with status of RUNNING) that try to mutate the same set of objects can lead to deadlock-like conditions resulting in severe slow-down and even job failures.

  • Don't submit multiple operations that mutate the same object in the same job, as the result can be unpredictable.

Retrieve results optimally

  • Don't poll the job status too frequently or you risk hitting rate limit errors.

  • Don't retrieve more than 1,000 results per page. The server could return fewer than that due to load or other factors.

  • Results are returned in the same order in which the operations were uploaded.
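One way to avoid polling too frequently is exponential backoff between status checks. The sketch below is a minimal, library-free illustration: poll_status stands in for whatever call fetches the job's current status, and the status strings are illustrative.

```python
import time

def wait_for_job(poll_status, initial_delay=1.0, multiplier=2.0, max_delay=60.0):
    """Poll a job's status with exponential backoff until it finishes.

    poll_status is a stand-in for the real status lookup; the delay
    doubles after each check, capped at max_delay seconds.
    """
    delay = initial_delay
    while True:
        status = poll_status()
        if status != "RUNNING":
            return status
        time.sleep(delay)
        delay = min(delay * multiplier, max_delay)

# Simulated status sequence standing in for a real job lifecycle.
statuses = iter(["RUNNING", "RUNNING", "DONE"])
final = wait_for_job(lambda: next(statuses), initial_delay=0.01)
```

Starting with a delay of a second or more and doubling it keeps the request rate well under typical rate limits while still detecting completion promptly for short jobs.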

Additional usage guidance

  • You can set an upper bound for how long a batch job is allowed to run before being cancelled. When creating a new batch job, set the metadata.execution_limit_seconds field to your preferred time limit, in seconds. There is no default time limit if metadata.execution_limit_seconds is not set.

  • Add no more than 1,000 operations per AddBatchJobOperationsRequest, and use the sequence_token to upload the remaining operations to the same job. Depending on the content of the operations, too many operations in a single AddBatchJobOperationsRequest can cause a REQUEST_TOO_LARGE error. Handle this error by reducing the number of operations and retrying the AddBatchJobOperationsRequest.
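Splitting a large operation list into chunks of at most 1,000 can be done with a simple slicing loop. This is a minimal sketch; the integers stand in for real operations, and the actual upload call that consumes each chunk (passing the previous response's sequence_token) is not shown.

```python
MAX_OPS_PER_REQUEST = 1_000  # recommended cap per AddBatchJobOperationsRequest

def chunk_operations(operations, chunk_size=MAX_OPS_PER_REQUEST):
    """Yield successive slices of at most chunk_size operations.

    Each slice would be sent in its own AddBatchJobOperationsRequest,
    with the sequence_token from the previous response.
    """
    for start in range(0, len(operations), chunk_size):
        yield operations[start:start + chunk_size]

# 2,500 stand-in operations split into 1,000 + 1,000 + 500.
chunks = list(chunk_operations(list(range(2_500))))
```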


  • Each BatchJob supports up to one million operations.

  • Each account can have up to 100 active or pending jobs at the same time.

  • Pending jobs older than 7 days are automatically removed.

  • Each AddBatchJobOperationsRequest has a maximum size of 10,484,504 bytes. If you exceed this, you will receive an INTERNAL_ERROR. You can determine the size of the request before submitting and take appropriate action if it is too large.


Java

    static final int MAX_REQUEST_BYTES = 10_484_504;
    // ... (code to get the request object)
    int sizeInBytes = request.getSerializedSize();


Python

    from google.ads.googleads.client import GoogleAdsClient
    MAX_REQUEST_BYTES = 10484504
    # ... (code to get the request object)
    size_in_bytes = request._pb.ByteSize()


Ruby

    require 'google/ads/google_ads'
    MAX_REQUEST_BYTES = 10484504
    # ... (code to get the request object)
    size_in_bytes = request.to_proto.bytesize


PHP

    const MAX_REQUEST_BYTES = 10484504;
    // ... (code to get the request object)
    $size_in_bytes = $request->byteSize();


C#

    using Google.Protobuf;
    const int MAX_REQUEST_BYTES = 10484504;
    // ... (code to get the request object)
    int sizeInBytes = request.ToByteArray().Length;


Perl

    use Devel::Size qw(total_size);
    use constant MAX_REQUEST_BYTES => 10484504;
    # ... (code to get the request object)
    my $size_in_bytes = total_size($request);
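Once you can measure a request's size, one way to "take appropriate action" is to halve the batch of operations until every request fits. This is a language-agnostic sketch in Python; size_of is a stand-in for the language-specific size check above (e.g. request.getSerializedSize() in Java), and the integers stand in for real operations.

```python
MAX_REQUEST_BYTES = 10_484_504  # maximum AddBatchJobOperationsRequest size

def split_until_fits(operations, size_of, limit=MAX_REQUEST_BYTES):
    """Recursively halve a batch until each half's serialized size fits.

    size_of maps a list of operations to the byte size of the request
    that would carry them; each returned batch satisfies the limit
    (or contains a single operation that cannot be split further).
    """
    if size_of(operations) <= limit or len(operations) <= 1:
        return [operations]
    mid = len(operations) // 2
    return (split_until_fits(operations[:mid], size_of, limit)
            + split_until_fits(operations[mid:], size_of, limit))

# Fake sizer: pretend each operation serializes to about 6 KB, so
# 2,000 operations (~12 MB) must be split into two batches of 1,000.
batches = split_until_fits(list(range(2_000)), lambda ops: len(ops) * 6_000)
```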