Attribution Reporting for Mobile overview

Recent updates

  • Updated the list of upcoming changes and known issues
  • Introduced lite flexible event-level configuration, as a bridge to the full flexible event-level configuration
  • Starting in September 2023, registerWebSource, registerWebTrigger, registerAppSource and registerAppTrigger must use strings for fields that expect a numeric value (such as priority, source_event_id, debug_key, trigger_data, deduplication_key, etc.)
  • In Q4 2023, the Android Attribution Reporting API will add Google Cloud support for generating summary reports with Aggregation Service on Google Cloud; more specific timing is reflected here. See the deployment guide for more information on setting up Aggregation Service with Google Cloud.
  • Added new privacy-preserving rate limits on the number of unique destinations.
  • Updated functionality for lookback window trigger filters is coming in Q1 2024; see the note for further information.

Overview

Today, it's common for mobile attribution and measurement solutions to use cross-party identifiers, such as Advertising ID. The Attribution Reporting API is designed to provide improved user privacy by removing reliance on cross-party user identifiers, and to support key use cases for attribution and conversion measurement across apps and the web.

This API has the following structural mechanisms that offer a framework for improving privacy, which later sections on this page describe in more detail:

  • Attribution happens on the device, and reports are sent off the device on a delayed, scheduled basis rather than in real time.
  • Event-level reports carry only a limited number of bits of trigger data, and noise is applied within a differential privacy framework.
  • Higher-fidelity trigger data is only available in aggregate, through encrypted aggregatable reports processed by the aggregation service.
  • Rate limits apply to registrations, attributions, and the number of ad techs and destinations involved.

The preceding mechanisms limit the ability to link user identity across two different apps or domains.

The Attribution Reporting API supports the following use cases:

  • Conversion reporting: Help advertisers measure the performance of their campaigns by showing them conversion (trigger) counts and conversion (trigger) values across various dimensions, such as by campaign, ad group, and ad creative.
  • Optimization: Provide event-level reports that support optimization of ad spend, by providing per-impression attribution data that can be used to train ML models.
  • Invalid activity detection: Provide reports that can be used in invalid traffic and ad fraud detection and analysis.

At a high level, the Attribution Reporting API works as follows, which later sections of this document describe in more detail:

  1. The ad tech completes an enrollment process to use the Attribution Reporting API.
  2. The ad tech registers attribution sources—ad clicks or views—with the Attribution Reporting API.
  3. The ad tech registers triggers—user conversions on the advertiser app or website—with the Attribution Reporting API.
  4. The Attribution Reporting API matches triggers to attribution sources (a conversion attribution), and the results are sent off-device to ad techs through event-level and aggregatable reports.

Get access to Attribution Reporting APIs

Ad tech platforms need to enroll to access the Attribution Reporting APIs. See Enroll for a Privacy Sandbox account for more information.

Register an attribution source (click or view)

The Attribution Reporting API refers to ad clicks and views as attribution sources. To register an ad click or ad view, call registerSource(). This API expects the following parameters:

  • Attribution source URI: The platform issues a request to this URI in order to fetch metadata associated with the attribution source.
  • Input event: Either an InputEvent object (for a click event) or null (for a view event).

When the API makes a request to the Attribution Source URI, the ad tech should respond with the attribution source metadata in a new HTTP header Attribution-Reporting-Register-Source, with the following fields:

  • Source event ID: This value represents the event-level data associated with this attribution source (ad click or view). Must be a 64-bit unsigned integer formatted as a string.
  • Destination: The app package name, or the eTLD+1 of the web origin, where the trigger event happens.
  • Expiry (optional): Expiry, in seconds, for when the source should be deleted off the device. Default is 30 days, with a minimum value of 1 day and a maximum value of 30 days. This is rounded to the nearest day. Can be formatted as either a 64-bit unsigned integer or string.
  • Event report window (optional): Duration in seconds after source registration during which event reports may be created for this source. If the event report window has passed, but the expiry has not yet passed, a trigger can still be matched with a source, but an event report is not sent for that trigger. Cannot be greater than Expiry. Can be formatted as either a 64-bit unsigned integer or string.
  • Aggregatable report window (optional): Duration in seconds after source registration during which aggregatable reports may be created for this source. Cannot be greater than Expiry. Can be formatted as either a 64-bit unsigned integer or string.
  • Source priority (optional): Used to select which attribution source a given trigger should be associated with, in case multiple attribution sources could be associated with the trigger. Must be a 64-bit signed integer formatted as a string.

    When a trigger is received, the API finds the matching attribution source with the highest source priority value and generates a report. Each ad tech platform can define its own prioritization strategy. For more details on how priority impacts attribution, see the prioritization example section.

    Higher values indicate higher priorities.
  • Install and post-install attribution windows (optional): Used to determine attribution for post-install events, described later on this page.
  • Filter data (optional): Used to selectively filter some triggers, effectively ignoring them. For more details, see the trigger filters section on this page.
  • Aggregation keys (optional): Specify segmentation to be used for aggregatable reports.
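
For illustration, a source registration response that sets several of the optional fields described above might look like the following sketch. The destination, timing values, and filter data are hypothetical, and all durations are expressed in seconds:

Attribution-Reporting-Register-Source: {
    "source_event_id": "234",
    "destination": "android-app://com.advertiser.example",
    // Delete the source from the device after 30 days (2592000 seconds).
    "expiry": "2592000",
    // Stop creating event-level reports for this source after 7 days.
    "event_report_window": "604800",
    // Keep creating aggregatable reports until the source expires.
    "aggregatable_report_window": "2592000",
    "priority": "10",
    // Optional filter data that trigger-side filters can match against.
    "filter_data": {
        "product": ["1234"]
    }
}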

Optionally, the attribution source metadata response may include additional data in the Attribution-Reporting-Redirect header. The data contains redirect URLs, which allow multiple ad techs to register a request.

The developer guide includes examples that show how to accept source registration.

The following steps show an example workflow:

  1. The ad tech SDK calls the API to initiate attribution source registration, specifying a URI for the API to call:

    registerSource(
        Uri.parse("https://adtech.example/attribution_source?AD_TECH_PROVIDED_METADATA"),
        myClickEvent);
    
  2. The API makes a request to https://adtech.example/attribution_source?AD_TECH_PROVIDED_METADATA, using one of the following headers:

    <!-- For click events -->
    Attribution-Reporting-Source-Info: navigation
    
    <!-- For view events -->
    Attribution-Reporting-Source-Info: event
    
  3. This ad tech's HTTPS server replies with headers containing the following:

    Attribution-Reporting-Register-Source: {
        "destination": "android-app://com.advertiser.example",
        "source_event_id": "234",
        "expiry": "60000",
        "priority": "5"
    }
    Attribution-Reporting-Redirect: https://adtechpartner1.example?their_ad_click_id=567
    Attribution-Reporting-Redirect: https://adtechpartner2.example?their_ad_click_id=890
    
  4. The API makes a request to each URL specified in Attribution-Reporting-Redirect. In this example, two ad tech partner URLs are specified, so the API makes one request to https://adtechpartner1.example?their_ad_click_id=567 and another request to https://adtechpartner2.example?their_ad_click_id=890.

  5. Each ad tech partner's HTTPS server replies with headers containing the following:

    Attribution-Reporting-Register-Source: {
        "destination": "android-app://com.advertiser.example",
        "source_event_id": "789",
        "expiry": "120000",
        "priority": "2"
    }
    

Three navigation (click) attribution sources are registered based on the requests shown in the previous steps.

Register an attribution source from WebView

WebView supports the use case where an app is rendering an ad within a WebView. This is handled by WebView directly calling registerSource() as a background request. This call associates the attribution source to the app instead of the top-level origin. Registering sources from embedded web content within a browser context is also supported; both API callers and apps need to adjust settings to do so. See Register attribution source and trigger from WebView for instructions for API callers and Attribution source and trigger registration from WebView for instructions for apps.

Since ad techs use common code across Web and WebView, WebView follows HTTP 302 redirects and passes on the valid registrations to the platform. We don't plan to support the Attribution-Reporting-Redirect header for this scenario, but reach out if you have an impacted use case.

Register a trigger (conversion)

Ad tech platforms can register triggers—conversions such as installs or post-install events—using the registerTrigger() method.

The registerTrigger() method expects the Trigger URI parameter. The API issues a request to this URI to fetch metadata associated with the trigger.

The API follows redirects. The ad tech server response should include an HTTP header called Attribution-Reporting-Register-Trigger, which represents information on one or more registered triggers. The header's content should be JSON-encoded and include the following fields:

  • Trigger data: Data to identify the trigger event (3 bits for clicks, 1 bit for views). Must be a 64-bit signed integer formatted as a string.
  • Trigger priority (optional): Represents the priority of this trigger compared to other triggers for the same attribution source. Must be a 64-bit signed integer formatted as a string. For more details on how priority impacts reporting, see the prioritization section.
  • Deduplication key (optional): Used to identify cases where the same trigger is registered multiple times by the same ad tech platform, for the same attribution source. Must be a 64-bit signed integer formatted as a string.
  • Aggregation keys (optional): A list of dictionaries that specifies the aggregation keys and which aggregatable reports should have their values aggregated.
  • Aggregation values (optional): A list of values that contribute to each key.
  • Filters (optional): Used to selectively filter triggers or trigger data. For more details, see the trigger filters section on this page.

Optionally, the ad tech server response may include additional data in the Attribution-Reporting-Redirect header. The data contains redirect URLs, which allow multiple ad techs to register a request.

Multiple ad techs can register the same trigger event using either redirects in the Attribution-Reporting-Redirect field or multiple calls to the registerTrigger() method. We recommend that you use the deduplication key field to avoid including duplicate triggers in reports in the case that the same ad tech provides multiple responses for the same trigger event. Learn more about how and when to use a deduplication key.

The developer guide includes examples that show how to accept trigger registration.

The following steps show an example workflow:

  1. The ad tech SDK calls the API to initiate trigger registration using a pre-enrolled URI. See Enroll for a Privacy Sandbox account for more information.

    registerTrigger(
        Uri.parse("https://adtech.example/attribution_trigger?AD_TECH_PROVIDED_METADATA"));
    
  2. The API makes a request to https://adtech.example/attribution_trigger?AD_TECH_PROVIDED_METADATA.

  3. This ad tech's HTTPS server replies with headers containing the following:

    Attribution-Reporting-Register-Trigger: {
        "event_trigger_data": [{
            // This returns 010 for click-through conversions (CTCs) and 0 for
            // view-through conversions (VTCs) in reports.
            "trigger_data": "1122",
            "priority": "3",
            "deduplication_key": "3344"
        }]
    }
    Attribution-Reporting-Redirect: https://adtechpartner.example?app_install=567
    
  4. The API makes a request to each URL specified in Attribution-Reporting-Redirect. In this example, only one URL is specified, so the API makes a request to https://adtechpartner.example?app_install=567.

  5. The partner ad tech's HTTPS server replies with headers containing the following:

    Attribution-Reporting-Register-Trigger: {
        "event_trigger_data": [{
            "trigger_data": "5566",
            "priority": "3",
            "deduplication_key": "3344"
        }]
    }
    

    Two triggers are registered based on the requests in the previous steps.

Attribution capabilities

The following sections explain how the Attribution Reporting API matches conversion triggers to attribution sources.

Source-prioritized attribution algorithm applied

The Attribution Reporting API employs a source-prioritized attribution algorithm to match a trigger (conversion) to an attribution source.

Priority parameters provide ways to customize the attribution of triggers to sources:

  • You can attribute triggers to certain ad events over others. For example, you may choose to place more emphasis on clicks rather than views, or focus on events from certain campaigns.
  • You can configure the attribution source and trigger such that, if you hit rate limits, you're more likely to receive the reports that are more important to you. For example, you might want to make sure that biddable conversions or high-value conversions are more likely to appear in these reports.

In the case where multiple ad techs register an attribution source, as described later on this page, this attribution happens independently for each ad tech. For each ad tech, the attribution source with the highest priority is attributed with the trigger event. If there are multiple attribution sources with the same priority, the API picks the last registered attribution source. Any other attribution sources that aren't picked are discarded and are no longer eligible for future trigger attribution.

Trigger filters

Source and trigger registration includes additional optional functionality to do the following:

  • Selectively filter some triggers, effectively ignoring them.
  • Choose trigger data for event-level reports based on source data.
  • Choose to exclude a trigger from event-level reports.

To selectively filter triggers, the ad tech can specify filter data, consisting of keys and values, during source and trigger registration. If the same key is specified for both the source and trigger, then the trigger is ignored if the intersection is empty. For example, a source can specify "product": ["1234"], where product is the filter key and 1234 is the value. If the trigger filter is set to "product": ["1111"], then the trigger is ignored. If there is no trigger filter key matching product, then the filters are ignored.

Another scenario where ad tech platforms may want to selectively filter triggers is to force a shorter expiry window. On trigger registration, an ad tech can specify (in seconds) a lookback window from when the conversion happened; for example, a 7 day lookback window would be defined as: "_lookback_window": 604800 // 7d

To decide whether a filter matches, the API first checks the lookback window. If a lookback window is specified, the duration since the source was registered must be less than or equal to the lookback window duration.

Ad tech platforms can also choose trigger data based on source event data. For example, source_type is automatically generated by the API as either navigation or event. During trigger registration, trigger_data can be set as one value for "source_type": ["navigation"] and as a different value for "source_type": ["event"].
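
As a sketch of how these pieces fit together (the filter keys and values are hypothetical), a source might declare filter data that a trigger then matches on, with different trigger_data per source type and a 7 day lookback window:

Attribution-Reporting-Register-Source: {
    "source_event_id": "234",
    "destination": "android-app://com.advertiser.example",
    "filter_data": {
        "product": ["1234"]
    }
}

Attribution-Reporting-Register-Trigger: {
    "event_trigger_data": [
        {
            // Used when the attributed source is a click (navigation).
            "trigger_data": "2",
            "filters": {"source_type": ["navigation"]}
        },
        {
            // Used when the attributed source is a view (event).
            "trigger_data": "1",
            "filters": {"source_type": ["event"]}
        }
    ],
    // Ignore this trigger unless the source declared "product": ["1234"]
    // and was registered within the last 7 days.
    "filters": {
        "product": ["1234"],
        "_lookback_window": 604800
    }
}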

Triggers are excluded from event-level reports if any of the following are true:

  • There is no trigger_data specified.
  • Source and trigger specify the same filter key, but the values don't match. Note that, in this case, the trigger is ignored for both event-level and aggregatable reports.

Post-install attribution

In some cases, there is a need for post-install triggers to be attributed to the same attribution source that drove the install, even if there are other eligible attribution sources that occurred more recently.

The API can support this use case by allowing ad techs to set a post-install attribution period:

  • When registering an attribution source, specify an install attribution window during which installs are expected (generally 2-7 days, accepted range 1 to 30 days). Specify this time window as a number of seconds.
  • When registering an attribution source, specify a post-install attribution exclusivity window where any post-install trigger events should be associated with the attribution source that drove the install (generally 7-30 days, accepted range 0 to 30 days). Specify this time window as a number of seconds.
  • The Attribution Reporting API validates when an app install happens and internally attributes the install to the source-prioritized attribution source. However, the install isn't sent to ad techs and doesn't count against the platforms' respective rate limits.
  • App install validation is available for any downloaded app.
  • Any future triggers that happen within the post-install attribution window are attributed to the same attribution source as the validated install, as long as that attribution source is eligible.
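
For example, a click source that expects an install within 2 days and claims post-install triggers for 10 days after that could be registered with a sketch like the following (both windows are in seconds; the other values are hypothetical):

Attribution-Reporting-Register-Source: {
    "source_event_id": "234",
    "destination": "android-app://com.advertiser.example",
    // Expect the install to happen within 2 days of this click.
    "install_attribution_window": "172800",
    // Attribute post-install triggers to this source for 10 days after the install.
    "post_install_exclusivity_window": "864000"
}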

In the future, we might explore extending the design to support more advanced attribution models.

The following list shows an example of how ad techs may use post-install attribution. Assume all attribution sources and triggers are registered by the same ad tech network, and all priorities are the same.

  • Click 1 (day 1): install_attribution_window is set to 172800 seconds (2 days), and post_install_exclusivity_window is set to 864000 seconds (10 days).
  • Verified install (day 2): The API internally attributes verified installs, but those installs aren't considered triggers. Therefore, no reports are sent at this point.
  • Trigger 1, first open (day 2): The first trigger registered by the ad tech. In this example it represents a first open, but it can be any trigger type. Attributed to click 1 (matches the attribution of the verified install).
  • Click 2 (day 4): Uses the same values for install_attribution_window and post_install_exclusivity_window as click 1.
  • Trigger 2, post-install (day 5): The second trigger registered by the ad tech. In this example it represents a post-install conversion, such as a purchase. Attributed to click 1 (matches the attribution of the verified install). Click 2 is discarded and isn't eligible for future attribution.

The following list provides some additional notes regarding post-install attribution:

  • If the verified install doesn't happen within the number of days specified by install_attribution_window, post-install attribution isn't applied.
  • Verified installs aren't registered by ad techs and aren't sent out in reports. They don't count against an ad tech's rate limits. Verified installs are only used to identify the attribution source that is credited with the install.
  • In the example from the preceding list, trigger 1 and trigger 2 represent a first open and a post-install conversion, respectively. However, ad tech platforms can register any type of trigger. In other words, the first trigger need not be a first open trigger.
  • If more triggers are registered after the post_install_exclusivity_window expires, click 1 is still eligible for attribution, assuming it hasn't expired and hasn't reached its rate limits.
    • Click 1 may still lose, or be discarded, if a higher-priority attribution source is registered.
  • If the advertiser app is uninstalled and reinstalled, the reinstall is counted as a new verified install.
  • If click 1 was a view event instead, both the "first open" and post-install triggers are still attributed to it. The API restricts attribution to one trigger per view, except in the case of post-install attribution where up to two triggers per view are allowed. In the post-install attribution case, the ad tech could receive 2 different reporting windows (at 2 days or at source expiry).

All combinations of app- and web-based trigger paths are supported

The Attribution Reporting API enables attribution of the following trigger paths on a single Android device:

  • App-to-app: The user sees an ad in an app, then converts in either that app or another installed app.
  • App-to-web: The user sees an ad in an app, then converts in a mobile or app browser.
  • Web-to-app: The user sees an ad in a mobile or app browser, then converts in an app.
  • Web-to-web: The user sees an ad in a mobile or app browser, then converts in either the same browser or another browser on the same device.

Web browsers can support new web-exposed functionality, similar to the Attribution Reporting API in the Privacy Sandbox for the Web, that calls the Android APIs to enable attribution across app and web.

Learn about the changes that ad techs and apps need to make in order to support trigger paths for cross app and web measurement.

Prioritize multiple triggers for a single attribution source

A single attribution source can lead to multiple triggers. For example, a purchase flow could involve an "app install" trigger, one or more "add-to-cart" triggers, and a "purchase" trigger. Each trigger is attributed to one or more attribution sources according to the source-prioritized attribution algorithm, described earlier on this page.

There are limits on how many triggers can be attributed to a single attribution source; for more details, read the section on viewing measurement data in attribution reports later on this page. In the cases where there are multiple triggers beyond these limits, it's useful to introduce prioritization logic to get back the most valuable triggers. For example, the developers of an ad tech might want to prioritize getting "purchase" triggers over "add-to-cart" triggers.

To support this logic, a separate priority field can be set on the trigger, and the highest priority triggers are picked before limits are applied, within a given reporting window.
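
For example, an ad tech that values purchases over add-to-cart events could register the two trigger types with different priorities, as in the following sketch (the trigger_data values are arbitrary):

// Add-to-cart trigger: lower priority.
Attribution-Reporting-Register-Trigger: {
    "event_trigger_data": [{
        "trigger_data": "1",
        "priority": "1"
    }]
}

// Purchase trigger: higher priority, so it is selected first
// when report limits for the source are reached.
Attribution-Reporting-Register-Trigger: {
    "event_trigger_data": [{
        "trigger_data": "2",
        "priority": "100"
    }]
}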

Allow multiple ad techs to register attribution sources or triggers

It's common for more than one ad tech to receive attribution reports, generally to perform cross-network deduplication. Therefore, the API allows multiple ad techs to register the same attribution source or trigger. An ad tech must register both attribution sources and triggers to receive postbacks from the API, and attribution is done among the attribution sources and triggers that the ad tech has registered with the API.

Advertisers that want to use a third party to perform cross-network deduplication can continue doing so, using techniques such as the following:

  • Setting up an in-house server to register and receive reports from the API.
  • Continuing to use an existing mobile measurement partner.

Attribution sources

Attribution source redirects are supported in the registerSource() method:

  1. The ad tech that calls the registerSource() method can provide an additional Attribution-Reporting-Redirect field in their response, which represents the set of partner ad tech's redirect URLs.
  2. The API then calls the redirect URLs so the attribution source can be registered by the partner ad techs.

Multiple partner ad tech URLs can be listed in the Attribution-Reporting-Redirect field, and partner ad techs cannot specify their own Attribution-Reporting-Redirect field.

The API also allows different ad techs to each call registerSource().

Triggers

For trigger registration, third parties are supported in a similar way: ad techs can either use the additional Attribution-Reporting-Redirect field, or they can each call the registerTrigger() method.

When an advertiser uses multiple ad techs to register the same trigger event, a deduplication key should be used. The deduplication key serves to disambiguate these repeated reports of the same event registered by the same ad tech platform. For example, an ad tech could have their SDK call the API directly to register a trigger and have their URL in the redirect field of another ad tech's call. If no deduplication key is provided, duplicate triggers may be reported back to each ad tech as unique.

Handle duplicate triggers

An ad tech may register the same trigger multiple times with the API. Scenarios include the following:

  • The user performs the same action (trigger) multiple times. For example, the user browses the same product multiple times in the same reporting window.
  • The advertiser app uses multiple SDKs for conversion measurement, which all redirect to the same ad tech. For example, the advertiser app uses two measurement partners, MMP #1 and MMP #2. Both MMPs redirect to ad tech #3. When a trigger happens, both MMPs register that trigger with the Attribution Reporting API. Ad tech #3 then receives two separate redirects—one from MMP #1 and one from MMP #2—for the same trigger.

In these cases, there are several ways to suppress event-level reports on duplicate triggers, to make it less likely to exceed the rate limits applied to event-level reports. The recommended way is to use a deduplication key.

Recommended method: deduplication key

The recommended method is for the advertiser app to pass a unique deduplication key to any ad techs or SDKs that it's using for conversion measurement. When a conversion happens, the app passes a deduplication key to the ad techs or SDKs. Those ad techs or SDKs then continue passing the deduplication key to redirects using a parameter in the URLs specified in Attribution-Reporting-Redirect.

Ad techs can choose to register only the first trigger with a given deduplication key, or can choose to register multiple triggers or all triggers. Ad techs can specify the deduplication_key when registering duplicate triggers.

If an ad tech registers multiple triggers with the same deduplication key and attributed source, only the first registered trigger is sent in the event-level reports. Duplicate triggers are still sent in encrypted aggregatable reports.
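
As a sketch of this flow, the first ad tech could register the trigger with the deduplication key it received from the advertiser app and forward the same key to a partner through the redirect URL (the dedup_key query parameter name is hypothetical):

Attribution-Reporting-Register-Trigger: {
    "event_trigger_data": [{
        "trigger_data": "2",
        // Deduplication key passed in by the advertiser app for this conversion.
        "deduplication_key": "3344"
    }]
}
Attribution-Reporting-Redirect: https://adtechpartner.example/trigger?dedup_key=3344

The partner ad tech can then read the key from the redirect URL and set the same deduplication_key in its own registration response.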

Alternate method: ad techs agree on per-advertiser trigger types

In situations where ad techs do not wish to use the deduplication key, or where the advertiser app cannot pass a deduplication key, an alternate option exists. All ad techs measuring conversions for a given advertiser need to work together to define different trigger types for each advertiser.

Ad techs that initiate the trigger registration call—for example, SDKs—include a parameter in URLs specified in Attribution-Reporting-Redirect, such as duplicate_trigger_id. That duplicate_trigger_id parameter can include information like the SDK name and the trigger type for that advertiser. Ad techs can then send a subset of these duplicate triggers to event-level reports. Ad techs can also include this duplicate_trigger_id in their aggregation keys.

Cross-network attribution example

In the example described in this section, the advertiser is using two serving ad tech platforms (Ad tech A and Ad tech B) and one measurement partner (MMP).

To start, Ad tech A, Ad tech B, and MMP must each complete enrollment to use the Attribution Reporting API. See Enroll for a Privacy Sandbox account for more information.

The following list provides a hypothetical series of user actions that each occur one day apart, and how the Attribution Reporting API handles those actions with respect to Ad tech A, Ad tech B, and MMP:

Day 1: User clicks on an ad served by Ad tech A

Ad tech A calls registerSource() with their URI. The API makes a request to the URI, and the click is registered with the metadata from Ad tech A's server response.

Ad tech A also includes MMP's URI in the Attribution-Reporting-Redirect header. The API makes a request to MMP's URI, and the click is registered with the metadata from MMP's server response.

Day 2: User clicks on an ad served by Ad tech B

Ad tech B calls registerSource() with their URI. The API makes a request to the URI, and the click is registered with the metadata from Ad tech B's server response.

Like Ad tech A, Ad tech B has also included MMP's URI in the Attribution-Reporting-Redirect header. The API makes a request to MMP's URI, and the click is registered with the metadata from the MMP's server response.

Day 3: User views an ad served by Ad tech A

The API responds in the same way that it did on Day 1, except that a view is registered for Ad tech A and MMP.

Day 4: User installs the app, which uses the MMP for conversion measurement

MMP calls registerTrigger() with their URI. The API makes a request to the URL, and the conversion is registered with the metadata from MMP's server response.

MMP also includes the URIs for Ad tech A and Ad tech B in the Attribution-Reporting-Redirect header. The API makes requests to Ad tech A and Ad tech B's servers, and the conversion is registered accordingly with the metadata from the server responses.

The following diagram illustrates the process described in the preceding list:

Example of how the Attribution Reporting API responds to a series of user actions.

Attribution works as follows:

  • Ad tech A sets the priority of clicks higher than views and therefore gets the install attributed to the click on Day 1.
  • Ad tech B gets the install attributed to its click from Day 2.
  • MMP sets the priority of clicks higher than views and gets the install attributed to the click on Day 2. Day 2's click is the highest priority, most recent ad event.

Cross-network attribution without redirects

While we recommend using redirects to allow multiple ad techs to register attribution sources and triggers, we recognize that there may be scenarios where using redirects isn't feasible. This section details how to support cross-network attribution without redirects.

High level flow

  1. On source registration, the serving ad tech network shares their source aggregation keys.
  2. On trigger registration, the advertiser or measurement partner chooses which source-side key pieces to use and specifies their attribution configuration.
  3. Attribution is based on the attribution config, shared keys, and any sources that were actually registered by that advertiser or measurement partner (e.g. from another serving ad tech network that has enabled redirects).
  4. If the trigger is attributed to a source from a non-redirecting serving ad tech, the advertiser or measurement partner can receive an aggregatable report that combines the source and trigger key pieces defined in step #2.

Source registration

On source registration, the serving ad tech network can choose to share their source aggregation keys or a subset of their source aggregation keys instead of redirecting. The serving ad tech is not required to actually use these source keys in their own aggregatable reports and can declare them only on behalf of the advertiser or measurement partner if needed.

Shared aggregation keys are available to any ad tech that registers a trigger for the same advertiser. However, it is up to the serving ad tech and the trigger measurement ad tech to collaborate on what types of aggregation keys are needed, their names, and how to decode the keys into readable dimensions.

Trigger registration

On trigger registration, the measurement ad tech chooses which source-side key pieces to apply to each trigger key piece, including any shared by serving ad techs.

Additionally, the measurement ad tech must also specify their waterfall attribution logic using a new attribution configuration API call. In this config, the ad tech can specify source priority, expiry, and filters for sources that they had no visibility into (for example, sources that did not use a redirect).
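
The following sketch shows what these registrations might look like. The shared_aggregation_keys and attribution_config field names, and their sub-fields, are assumptions based on the cross-network flow described here, and the values are hypothetical:

// Serving ad tech's source registration (no redirect to the MMP):
// it declares a key and shares it.
Attribution-Reporting-Register-Source: {
    "source_event_id": "234",
    "destination": "android-app://com.advertiser.example",
    "aggregation_keys": {
        "campaignCounts": "0x159"
    },
    // Assumed field name: makes "campaignCounts" visible to trigger-side ad techs.
    "shared_aggregation_keys": ["campaignCounts"]
}

// Measurement ad tech's trigger registration: declares a waterfall attribution
// config for serving ad techs whose sources it did not register itself.
Attribution-Reporting-Register-Trigger: {
    // Assumed field name and structure.
    "attribution_config": [{
        "source_network": "serving-adtech-b-enrollment-id",
        "priority": "100",
        "expiry": "604800"
    }],
    …
}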

Attribution

The Attribution Reporting API performs source-prioritized, last-touch attribution for the measurement ad tech based on their attribution config, shared keys, and any sources they registered. For example:

  • The user clicked on ads served by ad techs A, B, C, and D. The user then installed the advertiser's app, which uses a measurement ad tech partner (MMP).
  • Ad tech A redirects its sources to the MMP.
  • Ad techs B and C do not redirect, but share their aggregation keys.
  • Ad tech D neither redirects nor shares aggregation keys.

The MMP registers a source from Ad tech A, and defines an attribution config that includes Ad tech B and Ad tech D.

Attribution for the MMP now includes:

  • Ad tech A, since the MMP registered a source from that ad tech's redirect.
  • Ad tech B, since Ad tech B shared keys and the MMP included it in their attribution config.

Attribution for the MMP does not include:

  • Ad tech C, since the MMP did not include it in their attribution config.
  • Ad tech D, since they did not redirect nor share aggregation keys.

Debugging

To support debugging for cross-network attribution without redirects, an additional field, shared_debug_key, is available for ad techs to set upon source registration. If set on the original source registration, it will also be set on the corresponding derived source as debug_key during trigger registration for cross-network attribution without redirects. This debug key is attached as source_debug_key in event and aggregate reports.
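
For example, the serving ad tech's original source registration might set both keys, as in this sketch (the values are hypothetical):

Attribution-Reporting-Register-Source: {
    "source_event_id": "234",
    "destination": "android-app://com.advertiser.example",
    // Debug key for the serving ad tech's own reports.
    "debug_key": "1001",
    // Also propagated to the derived source, where it becomes that source's
    // debug_key and appears as source_debug_key in its reports.
    "shared_debug_key": "2002"
}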

This debug feature will only be supported for cross-network attribution without redirects under the following scenarios:

  • App-to-app measurement where AdId is permitted
  • App-to-web measurement where AdId is permitted and the AdId matches across both the app source and the web trigger
  • Web-to-web measurement (on the same browser app) when ar_debug is present on both source and trigger

Key discovery for cross-network attribution without redirects

Key discovery is intended to streamline how ad techs (usually MMPs) implement their attribution config for the purposes of cross-network attribution when one or several serving ad techs are using shared aggregation keys (as described in Cross-network attribution without redirects above).

When an MMP queries the Aggregation Service to generate summary reports for campaigns that include derived sources, Aggregation Service requires the MMP to specify the list of possible keys as input for the aggregation job. In some cases, the list of potential source aggregation keys may be very large, or unknown. Large lists of possible keys are challenging to track, and are also likely to be quite complex and costly to process. Consider the following examples:

  • List of all possible keys is large:
    • A serving ad network is executing a complex user acquisition initiative that includes 20 campaigns, each with 10 ad groups, and each ad group with 5 creatives that are refreshed every week based on performance.
  • List of all possible keys is unknown:
    • A serving ad network is serving ads across many mobile apps where the full list of publisher app IDs is not known at campaign launch.
    • An advertiser is working across multiple serving ad networks that are not redirecting to the MMP on source registration; each serving ad network has a different key structure and values, which may not be shared in advance with the MMP.

With the introduction of key discovery:

  • The Aggregation Service no longer requires a full enumeration of possible aggregation keys.
  • Instead of having to specify the full list of possible keys, an MMP can create an empty (or partially empty) set of keys and set a threshold, so that only (non pre-declared) keys with values exceeding the threshold are included in the output.
  • The MMP receives a summary report that includes noisy values for keys whose contributing values exceed the set threshold. The report may also include keys that have no associated real user contributions and consist purely of noise.
  • The MMP uses the x_network_bit_mapping field in trigger registration to determine which ad tech corresponds to which key.
  • The MMP can then contact the appropriate serving ad tech to understand the values in the source key.

In summary, key discovery enables MMPs to obtain aggregation keys without knowing them in advance, and avoid processing a large volume of source keys at the expense of added noise.

Daisy chain redirects

By providing multiple Attribution-Reporting-Redirect headers in a source or trigger registration HTTPS server-response, an ad tech can use the Attribution Reporting API to perform multiple source and trigger registrations with a single registration API call.

In the server-response, the ad tech can also include a single Location (302 redirect) header with a URL, which in turn leads to another registration, up to a set limit.

Both types of headers are optional: if redirects aren't needed, neither needs to be provided, and otherwise either one or both may be provided. Source and trigger registration requests (including redirects) are retried in the case of network failure. The number of retries per request is limited to a fixed number to avoid significant impact on the device.
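
For example, a single source registration response could combine both mechanisms: two list-style redirects plus a Location header that daisy-chains to one more registration. The URLs below are placeholders:

Attribution-Reporting-Register-Source: {
    "source_event_id": "234",
    "destination": "android-app://com.advertiser.example"
}
Attribution-Reporting-Redirect: https://adtechpartner1.example/source?click_id=123
Attribution-Reporting-Redirect: https://adtechpartner2.example/source?click_id=456
Location: https://adtechpartner3.example/source?click_id=789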

Redirects are not accepted for registerWebSource and registerWebTrigger used by browsers. More details can be found in the Cross Web and App Implementation Guide.

View measurement data in attribution reports

The Attribution Reporting API enables the following types of reports, described in more detail later on this page:

  • Event-level reports associate a particular attribution source (click or view) with limited bits of high-fidelity trigger data.
  • Aggregatable reports aren't necessarily tied with a specific attribution source. These reports provide richer, higher-fidelity trigger data than event-level reports, but this data is only available in an aggregate form.

These two report types are complementary to each other and can be used simultaneously.

Event-level reports

After a trigger is attributed to an attribution source, an event-level report is generated and stored on the device until it can be sent back to each ad tech's postback URL during one of the time windows for sending reports, described in more detail later on this page.

Event-level reports are useful when very little information is needed about the trigger. Event-level trigger data is limited to 3 bits of trigger data for clicks—which means that a trigger can be assigned one of eight categories—and 1 bit for views. In addition, event-level reports don't support encoding of high-fidelity trigger-side data, such as a specific price or trigger time. Because attribution happens on device, there is no support for cross-device analytics in the event-level reports.

The event-level report contains data such as the following:

  • Destination: Advertiser app package name or eTLD+1 where the trigger happened
  • Attribution Source ID: The same attribution source ID that was used for registering an attribution source
  • Trigger type: 1 or 3 bits of low-fidelity trigger data, depending on the type of attribution source

Privacy-preserving mechanisms applied to all reports

The following limits are applied after priorities regarding attribution sources and triggers are taken into consideration.

Limits on number of ad techs

There are limits on the number of ad techs that can register or receive reports from the API, with a current proposal of the following:

  • 100 ad techs with attribution sources per {source app, destination app, 30 days, device}.
  • 10 ad techs with attributed triggers per {source app, destination app, 30 days, device}.
  • 20 ad techs can register a single attribution source or trigger (via Attribution-Reporting-Redirect)

Limits on number of unique destinations

These limits make it difficult for a set of ad techs to collude by querying a large number of apps to understand a given user's app usage behavior.

  • Across all registered sources, across all ad techs, the API supports no more than 200 unique destinations, per source app, per minute.
  • Across all registered sources, for a single ad tech, the API supports no more than 50 unique destinations, per source app, per minute. This limit prevents one ad tech from using up the entire budget from the previously mentioned rate limit.

Expired sources don't count towards the rate limits.

One reporting origin per source app per day

A given ad tech platform may only use one reporting origin to register sources on a publisher app, for a given device, on the same day. This rate limit prevents ad techs from using multiple reporting origins to access additional privacy budget.

Consider the following scenario, where a single ad tech wants to use multiple reporting origins to register sources on a publisher app, for a single device.

  1. Ad tech A's reporting origin 1 registers a source on App B
  2. 12 hours later, ad tech A's reporting origin 2 attempts to register a source on App B

The second source, for Ad tech A's reporting origin 2, would be rejected by the API. Ad tech A's reporting origin 2 wouldn't be able to successfully register a source on the same device on App B until the following day.

Cooldown and rate limits

To limit the amount of user identity leakage between a {source, destination} pair, the API throttles the amount of total information sent in a given time period for a user.

The current proposal is to limit each ad tech to 100 attributed triggers per {source app, destination app, 30 days, device}.

Number of unique destinations

The API limits the number of destinations that an ad tech can try to measure. The lower the limit, the harder it is for an ad tech to use the API to attempt to measure user browsing activity that isn't associated with ads being shown.

The current proposal is to limit each ad tech to 100 distinct destinations with non-expired sources per source app.

Privacy-preserving mechanisms applied to event-level reports

Limited fidelity of trigger data

The API provides 1 bit for view-through triggers and 3 bits for click-through triggers. Attribution sources continue to support the full 64 bits of metadata.

You should evaluate if and how to reduce the information expressed in triggers so they work with the limited number of bits available in event-level reports.

Framework for differential privacy noise

A goal of this API is to allow event-level measurement to satisfy local differential privacy requirements by using k-randomized responses to generate a noisy output for each source event.

Noise is applied to whether an attribution source event is reported truthfully. When an attribution source is registered on the device, with probability $ 1-p $ the attribution source is registered as normal, and with probability $ p $ the device randomly chooses among all possible output states of the API (including not reporting anything at all, or reporting multiple fake reports).

The k-randomized response is an algorithm that is epsilon-differentially private if the following equation is satisfied, where k is the number of possible output states of the API:

\[ p = \frac{k}{k + e^ε - 1} \]

For low values of ε, the true output is protected by the k-randomized response mechanism. Exact noise parameters are a work in progress and subject to change based on feedback, with a current proposal of the following:

  • p=0.24% for navigation sources
  • p=0.00025% for event sources
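
To illustrate where such values come from, consider a navigation (click) source under the default configuration: it can produce up to 3 reports, each carrying one of 8 trigger data values in one of 3 reporting windows, which yields k = 2925 possible output states. Taking ε = 14 (the value assumed here for navigation sources, subject to change):

\[ p = \frac{2925}{2925 + e^{14} - 1} \approx 0.24\% \]

This is consistent with the navigation value listed above.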

Limits on available triggers (conversions)

There are limits on the number of triggers per attribution source, with a current proposal of the following:

  • 1-2 triggers for ad view attribution sources (2 triggers only available in the case of post-install attribution)
  • 3 triggers for ad click attribution sources

Specific time windows for sending reports (default behaviour)

Event-level reports for ad view attribution sources are sent 1 hour after the source expires. This expiry date can be configured, but it cannot be less than 1 day or more than 30 days. If two triggers are attributed to an ad view attribution source (via post-install attribution), event-level reports can be sent at the reporting window intervals specified as follows.

Event-level reports for ad click attribution sources follow a reporting schedule that cannot be configured; they are sent before or when the source expires, at specified points in time relative to when the source was registered. The time between source registration and expiry is split into multiple reporting windows. Each reporting window has a deadline (measured from the attribution source time). At the end of each reporting window, the device collects all the triggers that have occurred since the previous reporting window and sends a scheduled report. The API supports the following reporting windows:

  • 2 days: The device collects all the triggers that occurred at most 2 days after the attribution source was registered. The report is sent 2 days and 1 hour after the attribution source is registered.
  • 7 days: The device collects all the triggers that occurred more than 2 days but no more than 7 days after the attribution source was registered. The report is sent 7 days and 1 hour after the attribution source is registered.
  • A custom length of time, defined by the "expiry" attribute of an attribution source. The report is sent 1 hour after the specified expiry time. This value cannot be less than 1 day or more than 30 days.

Flexible event-level configuration

Ad techs are advised to start with the default configuration for event-level reporting as they begin utility testing, but it may not be ideal for all use cases. The Attribution Reporting API will support optional, more flexible configurations so that ad techs have increased control over the structure of their event-level reports and can maximize the utility of the data.

This additional flexibility will be introduced into the Attribution Reporting API in two phases:

  • Phase 1: Lite flexible event level configuration
    • This version provides a subset of the full features, and can be used independently of Phase 2.
  • Phase 2: Full version of flexible event level configuration

Phase 1 (Lite flexible event level) could be used to:

  • Vary the frequency of reports by specifying the number of reporting windows
  • Vary the number of attributions per source registration
  • Reduce the amount of total noise by decreasing the above parameters
  • Configure reporting windows rather than using the defaults

Phase 2 (Full flexible event level) could be used to do all of the capabilities in Phase 1 and:

  • Vary the trigger data cardinality in a report
  • Reduce the amount of total noise by decreasing the trigger data cardinality

Reducing one dimension of the default configuration allows the ad tech to increase another dimension. Alternatively, the total amount of noise in an event-level report may be reduced by decreasing the parameters mentioned above on net.

In addition to dynamically setting noise levels based on an ad tech's chosen configuration, we will have some parameter limits to avoid large computation costs and configurations with too many output states (where noise would increase considerably). The following is an example set of restrictions; feedback is encouraged on the design proposal:

  • Maximum of 20 total reports, globally and per trigger_data
  • Maximum of 5 possible reporting windows per trigger_data
  • Maximum of 32 trigger data cardinality (not applicable for Phase 1: Lite Flexible Event Level)

As ad techs start using this feature, be advised that using extreme values may result in a large amount of noise, or in a failure to register if privacy levels are not met.
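
As a sketch of a Phase 1 (lite) configuration, a source could request fewer reports and fewer, explicitly chosen reporting windows in exchange for less noise. The max_event_level_reports and event_report_windows field names follow the flexible event-level proposal and should be treated as assumptions here:

Attribution-Reporting-Register-Source: {
    "source_event_id": "234",
    "destination": "android-app://com.advertiser.example",
    // Allow at most 2 event-level reports for this source (assumed field name).
    "max_event_level_reports": 2,
    // Use two reporting windows, ending 2 days and 7 days after registration
    // (assumed field name; times in seconds).
    "event_report_windows": {
        "end_times": [172800, 604800]
    }
}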

Aggregatable reports

Before using aggregatable reports, you must set up your cloud account and start receiving aggregatable reports.

Aggregatable reports provide higher-fidelity trigger data from the device, beyond what event-level reports offer, and deliver it more quickly. This higher-fidelity data can only be learned in aggregate, and isn't associated with a particular trigger or user. Aggregation keys are up to 128 bits, which allows aggregatable reports to support reporting use cases such as the following:

  • Reports for trigger values, such as revenue
  • Handling more trigger types

In addition, aggregatable reports use the same source-prioritized attribution logic as event-level reports, but they support more conversions attributed to a click or view.

The overall design of how the Attribution Reporting API prepares and sends aggregatable reports, shown in the diagram, is as follows:

  1. The device sends encrypted aggregatable reports to the ad tech. In a production environment, ad techs can't use these reports directly.
  2. The ad tech sends a batch of aggregatable reports to the aggregation service for aggregation.
  3. The aggregation service reads a batch of aggregatable reports, decrypts and aggregates them.
  4. The final aggregates are sent back to the ad tech in a summary report.
Process that the Attribution Reporting API uses to prepare and send aggregatable reports.

Aggregatable reports contain the following data related to attribution sources:

  • Destination: The app's package name or eTLD+1 web URL where the trigger happened.
  • Date: The date when the event represented by the attribution source occurred.
  • Payload: Trigger values, collected as encrypted key/value pairs, which are used in the trusted aggregation service to compute aggregations.

Aggregation services

The following services provide aggregation functionality and help protect against inappropriate access of aggregation data.

These services are managed by different parties, which are described in more detail later on this page:

  • The aggregation service is the only one that ad techs are expected to deploy.
  • The key management and aggregatable report accounting services are run by trusted parties called coordinators. These coordinators attest that the code running the aggregation service is the publicly-available code provided by Google and that all aggregation service users have the same key and aggregatable report accounting services applied to them.

Aggregation service

Ad tech platforms must, in advance, deploy an aggregation service that's based on binaries provided by Google.

This aggregation service operates in a Trusted Execution Environment (TEE) hosted in the cloud. A TEE offers the following security benefits:

  • It ensures that the code operating in the TEE is the specific binary offered by Google. Unless this condition is satisfied, the aggregation service can't access the decryption keys it needs to operate.
  • It offers security around the running process, isolating it from external monitoring or tampering.

These security benefits make it safer for an aggregation service to perform sensitive operations, such as accessing encrypted data.

For more information on the design, workflow, and security considerations of the aggregation service, see the aggregation service document on GitHub.

Key management service

This service verifies that an aggregation service is running an approved version of the binary and then provides the ad tech's aggregation service with the correct decryption keys for their trigger data.

Aggregatable report accounting

This service tracks how often an ad tech's aggregation service accesses a specific trigger—which can contain multiple aggregation keys—and limits access to the appropriate number of decryptions. Refer to the Aggregation Service for the Attribution Reporting API design proposal for details.

Aggregatable Reports API

The API for creating contributions to aggregatable reports uses the same base API as when registering an attribution source for event-level reports. The following sections describe the extensions of the API.

Register the aggregatable source data

When the API makes a request to the Attribution Source URI, the ad tech can register a list of aggregation keys, used to build histogram contributions, by responding with a new field called aggregation_keys in the HTTP header Attribution-Reporting-Register-Source. This field is a dictionary with each key being a key_name and each value being a key_piece:

  • (Key) Key name: A string for the name of the key. Used as a join key to combine with trigger-side keys to form the final key.
  • (Value) Key piece: A bitstring value for the key.

The final histogram bucket key is fully defined at trigger time by performing a binary OR operation on these pieces and the trigger-side pieces.

Final keys are restricted to a maximum of 128 bits; keys longer than this are truncated. This means that hex strings in the JSON should be limited to at most 32 digits.

Learn more about how aggregation keys are structured and how you can configure aggregation keys.

In the following example, an ad tech uses the API to collect the following:

  • Aggregate conversion counts at a campaign level
  • Aggregate purchase values at a geo level
// This is where the Attribution-Reporting-Register-Source object appears when
// an ad tech registers an attribution source.

// Attribution source metadata specifying histogram contributions in aggregate report.
Attribution-Reporting-Register-Source:
…
"aggregation_keys": {
  // Generates a "0x159" key piece (low order bits of the key) for the key
  // named "campaignCounts".
  // User saw an ad from campaign 345 (out of 511).
  "campaignCounts": "0x159",

  // Generates a "0x5" key piece (low order bits of the key) for the key
  // named "geoValue".
  // Source-side geo region = 5 (US), out of a possible ~100 regions.
  "geoValue": "0x5"
}

Register the aggregatable trigger

Trigger registration includes two additional fields.

The first field is used to register a list of aggregate keys on the trigger side. The ad tech should respond back with the aggregatable_trigger_data field in HTTP header Attribution-Reporting-Register-Trigger, with the following fields for each aggregate key in the list:

  • Key piece: A bitstring value for the key.
  • Source keys: A list of strings with the names of attribution source side keys that the trigger key should be combined with to form the final keys.

The second field is used to register a list of values that should contribute to each key. The ad tech should respond with the aggregatable_values field in the HTTP header Attribution-Reporting-Register-Trigger; the values can be integers in the range $ [1, 2^{16}] $.

Each trigger can make multiple contributions to the aggregatable reports. The total amount of contributions to any given source event is bound by an $ L1 $ parameter, which is the maximum sum of the contributions (values) across all aggregation keys for a given source. $ L1 $ refers to the L1 sensitivity or norm of the histogram contributions per source event. Exceeding this limit causes future contributions to be silently dropped. The initial proposal is to set $ L1 $ to $ 2^{16} $ (65536).

The noise in the aggregation service is scaled in proportion to this parameter. Given this, it is recommended to appropriately scale the values reported for a given aggregate key, based on the portion of $ L1 $ budget allocated to it. This approach helps ensure that the aggregate reports retain the highest possible fidelity when noise is applied. This mechanism is highly flexible and can support many aggregation strategies.

In the following example, the privacy budget is split equally between campaignCounts and geoValue by splitting the $ L1 $ contribution to each:

// This is where the Attribution-Reporting-Register-Trigger object appears
// when an ad tech registers a conversion trigger.

// Specify a list of dictionaries that generates aggregation keys.
Attribution-Reporting-Register-Trigger: {
    …
    "aggregatable_trigger_data": [
        // Each dictionary independently adds pieces to multiple source keys.
        {
            // Conversion type purchase = 2 at a 9-bit offset, i.e. 2 << 9 = 0x400.
            // A 9-bit offset is needed because there are 511 possible campaigns,
            // which take up 9 bits in the resulting key.
            "key_piece": "0x400",
            // Apply this key piece to:
            "source_keys": ["campaignCounts"]
        },
        {
            // Purchase category shirts = 21 at a 7-bit offset, i.e. 21 << 7 = 0xA80.
            // A 7-bit offset is needed because there are ~100 regions for the geo key,
            // which take up 7 bits of space in the resulting key.
            "key_piece": "0xA80",
            // Apply this key piece to:
            "source_keys": ["geoValue", "nonMatchingIdsListedHereAreIgnored"]
        }
    ],

    // Specify an amount of an abstract value, which can be an integer in [1, 2^16],
    // to contribute to each key that is attached to aggregation keys in the order
    // that they're generated.
    "aggregatable_values": {
        // Privacy budget for each key is L1 / 2 = 2^15 (32768).
        // Conversion count was 1.
        // Scale the count to use the full budget allocated: 1 * 32768 = 32768.
        "campaignCounts": 32768,

        // Purchase price was $52.
        // Purchase values for the app range from $1 to $1,024 (integers only).
        // The scaling factor applied is 32768 / 1024 = 32.
        // For the $52 purchase, scale the value by 32 ($52 * 32 = $1,664).
        "geoValue": 1664
    }
}

The preceding example generates the following histogram contributions:

[
  // campaignCounts:
  {
    "key": "0x559", // = 0x159 | 0x400
    "value": 32768
  },
  // geoValue:
  {
    "key": "0xA85",  // = 0x5 | 0xA80
    "value": 1664
  }
]

The scaling factors can be inverted in order to obtain the correct values, modulo noise that is applied:

L1 = 65536
trueCampaignCounts = campaignCounts / (L1 / 2)
trueGeoValue = geoValue / (L1 / 2) * 1024
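
The key combination and value scaling shown above can also be expressed in code. The following Kotlin sketch is illustrative only: the constants mirror this example (an L1 budget of 2^16 split equally between campaignCounts and geoValue, purchase values up to $1,024), and the function names are hypothetical.

// Illustrative helpers that mirror the preceding example. Constants and function names
// are hypothetical; only the arithmetic comes from the example above.
const val L1 = 65_536L
const val KEY_BUDGET = L1 / 2           // L1 budget split equally: 32,768 per key
const val MAX_PURCHASE_VALUE = 1_024L   // purchase values range from $1 to $1,024

// Final aggregation keys are the bitwise OR of the source and trigger key pieces.
fun combineKeyPieces(sourcePiece: Long, triggerPiece: Long): Long = sourcePiece or triggerPiece

// Trigger side: scale raw values so each key uses its full share of the L1 budget.
fun scaleCampaignCount(count: Long): Long = count * KEY_BUDGET
fun scalePurchaseValue(dollars: Long): Long = dollars * (KEY_BUDGET / MAX_PURCHASE_VALUE)

// Aggregation side: invert the scaling on the noisy summed values.
fun recoverCampaignCount(aggregatedSum: Double): Double = aggregatedSum / KEY_BUDGET
fun recoverPurchaseValue(aggregatedSum: Double): Double =
    aggregatedSum / KEY_BUDGET * MAX_PURCHASE_VALUE

fun main() {
    println(combineKeyPieces(0x159, 0x400).toString(16)) // 559
    println(combineKeyPieces(0x5, 0xA80).toString(16))   // a85
    println(scaleCampaignCount(1))                       // 32768
    println(scalePurchaseValue(52))                      // 1664
    println(recoverCampaignCount(32768.0))               // 1.0
    println(recoverPurchaseValue(1664.0))                // 52.0
}

In a real pipeline, the recovered values are approximate because the aggregation service adds noise to the summed contributions before returning summary reports.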

Differential privacy

A goal of this API is to have a framework which can support differentially private aggregate measurement. This can be achieved by adding noise proportional to the $ L1 $ budget, such as picking noise with the following distribution:

\[ \mathrm{Laplace}\left(\frac{L1}{\epsilon}\right) \]
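
For intuition, the following Kotlin sketch samples noise from such a distribution using inverse transform sampling. This is not the aggregation service's implementation; it only illustrates how the scale parameter L1 / ε controls the magnitude of the noise added to each aggregated value.

import kotlin.math.abs
import kotlin.math.ln
import kotlin.math.sign
import kotlin.random.Random

// Illustrative only: sample Laplace(0, L1 / epsilon) noise via inverse transform sampling.
fun laplaceNoise(l1: Double, epsilon: Double, random: Random = Random.Default): Double {
    val scale = l1 / epsilon
    val u = random.nextDouble() - 0.5            // uniform in [-0.5, 0.5)
    return -scale * sign(u) * ln(1 - 2 * abs(u)) // inverse CDF of the Laplace distribution
}

fun main() {
    // A larger epsilon (weaker privacy) means less noise relative to the L1 budget.
    val noisySum = 32_768.0 + laplaceNoise(l1 = 65_536.0, epsilon = 10.0)
    println(noisySum)
}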

Protected Audience API and Attribution Reporting API Integration

Cross-API integration across the Protected Audience and Attribution Reporting APIs enables ad techs to evaluate their attribution performance across various remarketing tactics to understand which types of audiences deliver the highest ROI.

Through this cross-API integration, ad techs can:

  • Create a key-value map of URIs to be used for both 1) interaction reporting and 2) source registration.
  • Include CustomAudience in their source-side key mapping for aggregate summary reporting (using the Attribution Reporting API).

When a user sees or clicks on an ad:

  • The URL used to report those interactions using Protected Audience will also be used to register a view or click as an eligible source with the Attribution Reporting API.
  • The ad tech may choose to pass CustomAudience (or other relevant contextual information about the ad such as ad placement or view duration) using that URL so this metadata can propagate down to summary reports when the ad tech is reviewing aggregate campaign performance.

For more information on how this is enabled within Protected Audience, see the relevant section of the Protected Audience API explainer.

Registration priority, attribution, and reporting examples

This example showcases a set of user interactions and how ad tech-defined attribution source and trigger priorities could affect attributed reports. In this example, we assume the following:

  • All attribution sources and triggers are registered by the same ad tech, for the same advertiser.
  • All attribution sources and triggers occur during the first event reporting window (within 2 days of initially displaying the ads in a publisher app).

Consider the case where a user does the following:

  1. The user sees an ad. The ad tech registers an attribution source with the API, with a priority of 0 (view #1).
  2. The user sees an ad, registered with a priority of 0 (view #2).
  3. The user clicks an ad, registered with a priority of 1 (click #1).
  4. The user converts (reaches landing page) in an advertiser app. The ad tech registers a trigger with the API, with a priority of 0 (conversion #1).
    • As triggers are registered, the API performs attribution first before generating reports.
    • There are 3 attribution sources available: view #1, view #2, and click #1. The API attributes this trigger to click #1 because it has the highest priority and is the most recent.
    • View #1 and view #2 are discarded and are no longer eligible for future attribution.
  5. The user adds an item to their cart in the advertiser app, registered with a priority of 1 (conversion #2).
    • Click #1 is the only eligible attribution source. The API attributes this trigger to click #1.
  6. The user adds an item to their cart in the advertiser app, registered with a priority of 1 (conversion #3).
    • Click #1 is the only eligible attribution source. The API attributes this trigger to click #1.
  7. The user adds an item to their cart in the advertiser app, registered with a priority of 1 (conversion #4).
    • Click #1 is the only eligible attribution source. The API attributes this trigger to click #1.
  8. The user makes a purchase in the advertiser app, registered with a priority of 2 (conversion #5).
    • Click #1 is the only eligible attribution source. The API attributes this trigger to click #1.

Event-level reports have the following characteristics:

  • By default, reports for the first 3 triggers attributed to a click, and for the first trigger attributed to a view, are sent out after the applicable reporting windows.
  • Within the reporting window, if triggers are registered with a higher priority, they take precedence and replace the most recent trigger, as illustrated in the sketch after this list.
  • In the preceding example, the ad tech receives 3 event reports after the 2-day reporting window, for conversion #2, conversion #3, and conversion #5.
    • All 5 triggers are attributed to click #1. By default, the API would send out reports for the first 3 triggers: conversion #1, conversion #2, and conversion #3.
    • However, conversion #4's priority (1) is higher than conversion #1's priority (0), so conversion #4's event report replaces conversion #1's in the set of reports to be sent.
    • Additionally, conversion #5's priority (2) is higher than that of any other trigger, so conversion #5's event report replaces conversion #4's.
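
The replacement behavior described in this list can be illustrated with a small simulation. The following Kotlin sketch is not the API implementation; it assumes, consistent with the example above, a limit of three pending reports for a click source and that a higher-priority trigger replaces the most recently scheduled report among those with the lowest priority.

// Toy simulation of event-report replacement for a single click source.
// Assumptions (not the actual API implementation): at most 3 pending reports; a trigger with
// higher priority than the lowest-priority pending report replaces the most recently
// scheduled report among those sharing that lowest priority.
data class PendingReport(val name: String, val priority: Long, val order: Int)

fun registerTrigger(pending: MutableList<PendingReport>, name: String, priority: Long, order: Int) {
    if (pending.size < 3) {
        pending += PendingReport(name, priority, order)
        return
    }
    // Lowest priority first; ties broken so the most recently scheduled report is dropped.
    val candidate = pending
        .sortedWith(compareBy({ it.priority }, { -it.order }))
        .first()
    if (priority > candidate.priority) {
        pending.remove(candidate)
        pending += PendingReport(name, priority, order)
    }
}

fun main() {
    val pending = mutableListOf<PendingReport>()
    registerTrigger(pending, "conversion #1", priority = 0, order = 1)
    registerTrigger(pending, "conversion #2", priority = 1, order = 2)
    registerTrigger(pending, "conversion #3", priority = 1, order = 3)
    registerTrigger(pending, "conversion #4", priority = 1, order = 4) // replaces conversion #1
    registerTrigger(pending, "conversion #5", priority = 2, order = 5) // replaces conversion #4
    println(pending.map { it.name }) // [conversion #2, conversion #3, conversion #5]
}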

Aggregatable reports have the following characteristics:

  • Encrypted aggregatable reports are sent to the ad tech as soon as they are processed, a few hours after the triggers are registered.

    As an ad tech, you create your batches based on the information that arrives unencrypted in your aggregatable reports. This information is contained in the shared_info field of each aggregatable report and includes the timestamp and reporting origin. You can't batch based on any encrypted information in your aggregation key-value pairs. Simple strategies include batching reports daily or weekly. Ideally, batches should contain at least 100 reports each. A sketch of daily batching follows this list.

  • It's up to the ad tech to decide when and how to batch the aggregatable reports and send them to the aggregation service.

  • Compared to event-level reports, encrypted aggregatable reports can attribute more triggers to a source.

  • In the preceding example, 5 aggregatable reports are sent out, one for each registered trigger.
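
As a rough illustration of the daily batching mentioned in this list, the following Kotlin sketch groups aggregatable reports by the day of their scheduled report time, read from the cleartext shared_info field. The wrapper class and exact field names here are assumptions made to keep the sketch self-contained; verify them against the reports you actually receive.

import org.json.JSONObject

// Hypothetical wrapper around a received aggregatable report.
data class AggregatableReport(val rawJson: String)

// Group reports into daily batches based on the scheduled report time in shared_info.
fun batchByDay(reports: List<AggregatableReport>): Map<Long, List<AggregatableReport>> =
    reports.groupBy { report ->
        // shared_info is itself a JSON string embedded in the report body.
        val sharedInfo = JSONObject(JSONObject(report.rawJson).getString("shared_info"))
        val scheduledSeconds = sharedInfo.getString("scheduled_report_time").toLong()
        scheduledSeconds / 86_400 // day index since the Unix epoch
    }

fun main() {
    // Minimal example report; all fields other than shared_info are omitted here.
    val report = AggregatableReport(
        """{"shared_info": "{\"scheduled_report_time\": \"1700000000\"}"}"""
    )
    println(batchByDay(listOf(report)).keys) // [19675]
}

Each resulting batch (ideally containing at least 100 reports) can then be sent to the aggregation service for processing.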

Transitional debugging reports

The Attribution Reporting API is a new and fairly complex way to do attribution measurement without cross-app identifiers. As such, we support a transitional mechanism to learn more information about attribution reports when the advertising ID is available (the user has not opted out of personalization using the advertising ID, and the publisher or advertiser app has declared the AdID permission). This ensures that the API can be fully understood during rollout, helps flush out any bugs, and makes it easier to compare performance with advertising ID-based alternatives. There are two types of debugging reports: attribution-success and verbose.

Read the guide on transitional debugging reports for details on debugging reports with app-to-web and web-to-app measurement.

Attribution-success debugging reports

Source and trigger registrations both accept a new 64-bit debug_key field (formatted as a String), which the ad tech populates. source_debug_key and trigger_debug_key are passed unaltered in both event-level and aggregate reports.

If a report is created with both source and trigger debug keys, a duplicate debug report is sent with limited delay to a .well-known/attribution-reporting/debug/report-event-attribution endpoint. The debug reports are identical to normal reports, including both debug key fields. Including these keys in both allows tying normal reports to the separate stream of debug reports.

  • For event-level reports:
    • Duplicate debug reports are sent with limited delay and therefore aren't suppressed by limits on available triggers, which allows ad tech to understand the impact of those limits for event-level reports.
    • Event-level reports associated with false trigger events will not have trigger_debug_keys. This allows ad tech to more closely understand how noise is applied in the API.
  • For aggregatable reports:
    • We will support a new debug_cleartext_payload field which contains the decrypted payload, only if both source_debug_key and trigger_debug_key are set.

Verbose debugging reports

Verbose debugging reports allow developers to monitor certain failures in the attribution source or trigger registrations. These debugging reports are sent with limited delay after attribution source or trigger registrations to a .well-known/attribution-reporting/debug/verbose endpoint.

Each verbose report contains the following fields:

  • Type: what caused the report to be generated. See the full list of verbose report types.
    • In general, there are source verbose reports and trigger verbose reports.
    • Source verbose reports require the advertising ID to be available to the publisher app, and trigger verbose reports require the advertising ID to be available to the advertiser app.
    • Trigger verbose reports (with the exception of trigger-no-matching-source) can optionally include the source_debug_key. This can only be included if the advertising ID is also available to the publisher app.
  • Body: The report's body, which depends on its type.

Ad techs need to opt in to receive verbose debugging reports by using a new debug_reporting dictionary field in the Attribution-Reporting-Register-Source and Attribution-Reporting-Register-Trigger headers. A sketch of a registration header that sets the debugging fields follows the list below.

  • Source verbose reports require opt-in on the source registration header only.
  • Trigger verbose reports require opt-in on the trigger registration header only.
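
As a sketch of how the debugging fields fit into a registration, the following Kotlin snippet builds an Attribution-Reporting-Register-Trigger header value that sets a debug_key and opts in to verbose debugging reports. The server framework and the omitted registration fields are placeholders; only the two debug fields shown here come from this document.

import org.json.JSONObject

// Illustrative only: build the JSON value for the Attribution-Reporting-Register-Trigger
// header with the debugging fields set.
fun buildTriggerRegistrationHeader(debugKey: String): String {
    val registration = JSONObject()
        // ... other trigger registration fields (event_trigger_data, and so on) go here ...
        .put("debug_key", debugKey)   // 64-bit value, formatted as a string
        .put("debug_reporting", true) // opt in to verbose debugging reports
    return registration.toString()
}

// The ad tech's server attaches this value as a response header during trigger registration,
// for example: response.setHeader("Attribution-Reporting-Register-Trigger", headerValue).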

How to use debug reports

If a conversion took place (according to your existing measurement system) and a debug report was received for that conversion, this means the trigger was successfully registered.

For each debug attribution report, check if you're receiving a regular attribution report that matches the two debug keys.
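
One way to run this check is to index your regular reports by their debug-key pair, as in the following Kotlin sketch. The data classes and fields here are hypothetical; adapt them to however you parse and store reports.

// Hypothetical parsed representations of received reports.
data class RegularReport(val sourceDebugKey: String?, val triggerDebugKey: String?)
data class DebugReport(val sourceDebugKey: String, val triggerDebugKey: String)

// Returns the debug reports for which no regular report with the same
// (source_debug_key, trigger_debug_key) pair was received.
fun findUnmatchedDebugReports(
    regular: List<RegularReport>,
    debug: List<DebugReport>
): List<DebugReport> {
    val regularKeyPairs = regular.mapNotNull { report ->
        val source = report.sourceDebugKey
        val trigger = report.triggerDebugKey
        if (source != null && trigger != null) source to trigger else null
    }.toSet()
    return debug.filterNot { (it.sourceDebugKey to it.triggerDebugKey) in regularKeyPairs }
}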

When there's no match, it can be for a number of reasons.

Works as intended:

  • Privacy-preserving API behaviors:
    • A user hits the report rate limit, which prevents all subsequent reports from being sent in that time period; or a source is removed due to the pending destination limit.
    • For event-level reports: the report is subject to randomized response (noise) and is suppressed, or you may receive a randomized report.
    • For event-level reports: the limit of three (for clicks) or one (for views) reports has been reached, and subsequent reports have no explicit priority set, or a priority that is lower than existing reports.
    • The contribution limits for aggregatable reports have been exceeded.
  • Ad tech-defined business logic:
    • A trigger is filtered out via filters or priority rules.
  • Time delays or interactions with network availability (e.g., the user turns off their device for an extended period of time).

Unintended causes:

  • Implementation issues:
    • The source header is misconfigured.
    • The trigger header is misconfigured.
    • Other configuration issues.
  • Device or network issues:
    • Failures due to network conditions.
    • Source or trigger registration response doesn't reach the client.
    • API bug.

Future considerations & open questions

The Attribution Reporting API is a work in progress. We're also exploring future potential features, such as non-last-click attribution models and cross-device measurement use cases.

Additionally, we'd like to seek feedback from the community on a few issues:

  1. Are there any use cases where you'd like the API to send out reports for the verified install? These reports would count against ad tech platforms' respective rate limits.
  2. Do you foresee any difficulties with passing the InputEvent from the app to ad tech for source registration?
  3. Do you have any special attribution use cases for pre-loaded apps or re-installed apps?