Depending on your inventory, sharding (breaking up feeds into multiple files) may be necessary.

Note: Sharding might only be applicable to some of the feeds you submit and depends on the type of inventory submitted. Reach out to your Google contact if you are unsure of the best approach.
When to use sharding
- The feed exceeds 200 MB for one file (after gzip compression).
  Example: The generated availability feed is 1 GB. It should be sharded into five or more separate files (or shards).
- The partner's inventory is distributed across systems and/or regions, making it difficult to reconcile.
  Example: The partner has US and EU inventory that live in separate systems. The feed can be generated as two files (or shards), one for the US and one for the EU, with the same nonce and generation_timestamp.

Note: Before using sharding, make sure you are compressing your feed uploads with gzip (see /actions-center/verticals/reservations/waitlists/reference/tutorials/compression). Gzip can reduce feed size by 10x or more, and may allow you to skip or defer sharding your feed.
General rules
- Each shard cannot exceed 200 MB for one file (after gzip compression).
- We recommend no more than 20 shards per feed. If you have a business justification that requires more than that amount, contact support for further instructions.
- Individual records (one Merchant object, for example) must be sent in a single shard; they cannot be split across multiple shards. However, they don't have to be sent in the shard with the same shard_number in future feeds.
- For better performance, split your data evenly among the shards so that all sharded files are similar in size.

Note: Google processes feed files as soon as they're uploaded to the SFTP server. If the feed is sharded into multiple files, processing begins after you upload the last file. If your feed contains errors, you receive an email with the feed error codes (see /actions-center/verticals/reservations/waitlists/reference/feeds/feed-errors).
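The even-split rule above can be sketched as follows. This is a minimal illustration, not part of the feed API: partition_records is a hypothetical helper, and round-robin assignment is just one way to keep shard files similar in size while never splitting an individual record across shards.

```python
def partition_records(records, total_shards):
    """Round-robin records into shards so all shard files end up
    similar in size. Each record lands in exactly one shard; a record
    is never split across shards."""
    shards = [[] for _ in range(total_shards)]
    for i, record in enumerate(records):
        shards[i % total_shards].append(record)
    return shards

# Hypothetical inventory: 10 records spread over 3 shards.
records = [{"merchant_id": str(i)} for i in range(10)]
shards = partition_records(records, total_shards=3)
```

Any partitioning scheme works as long as shard sizes stay balanced and under the 200 MB limit; round-robin is simply the easiest to reason about.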
How to shard feeds
For each file (or shard), set the FeedMetadata as follows:

- processing_instruction set to PROCESS_AS_COMPLETE.
- shard_number set to the current shard of the feed (from 0 to total_shards - 1, without gaps).
- total_shards set to the total number of shards for the feed (starting from 1).
- nonce set to a unique identifier that is the same across all shards of the same feed, but different from the value of other feeds. nonce must be a positive int (uint64).
- generation_timestamp set to the Unix epoch timestamp. It must be the same across all shards of the feed.
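The field rules above can be sketched as a small shard writer. This is an illustrative sketch, not an official tool: write_shards is a hypothetical helper, and the filename pattern and gzip output follow the recommendation below.

```python
import gzip
import json
import os
import tempfile
import time

def write_shards(feed_type, shard_payloads, nonce, out_dir):
    """Write one gzipped file per shard. All shards carry the same
    nonce and generation_timestamp; shard_number runs 0..total-1."""
    generation_timestamp = int(time.time())  # same across all shards
    total_shards = len(shard_payloads)
    paths = []
    for shard_number, payload in enumerate(shard_payloads):
        feed = {
            "metadata": {
                "processing_instruction": "PROCESS_AS_COMPLETE",
                "shard_number": shard_number,   # 0 .. total_shards - 1, no gaps
                "total_shards": total_shards,
                "nonce": nonce,                 # identical for every shard
                "generation_timestamp": generation_timestamp,
            },
            "service_availability": payload,
        }
        # Filename encodes feed type, timestamp, shard number, and total.
        name = (f"{feed_type}_{generation_timestamp}"
                f"_{shard_number + 1:03d}_of_{total_shards:03d}.json.gz")
        path = os.path.join(out_dir, name)
        with gzip.open(path, "wt", encoding="utf-8") as f:
            json.dump(feed, f)
        paths.append(path)
    return paths

# Two empty payloads stand in for real availability slices.
paths = write_shards("availability_feed", [[], []], nonce=111111,
                     out_dir=tempfile.mkdtemp())
```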
Recommended: For each file (or shard), set the filename to indicate the feed type, the timestamp, the shard number, and the total number of shards. Shards should be roughly equal in size and are processed once all shards are uploaded.

Example: "availability_feed_1574117613_001_of_002.json.gz"
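Putting these fields together, an availability feed split into three shards looks like the following. All three shards share the same nonce and generation_timestamp; each carries a different slice of the inventory.

Shard 0:

```json
{
  "metadata": {
    "processing_instruction": "PROCESS_AS_COMPLETE",
    "shard_number": 0,
    "total_shards": 3,
    "nonce": 111111,
    "generation_timestamp": 1524606581
  },
  "service_availability": [
    {
      "availability": [
        {
          "spots_total": 1,
          "spots_open": 1,
          "duration_sec": 3600,
          "service_id": "1000",
          "start_sec": 1577275200,
          "merchant_id": "merchant1",
          "confirmation_mode": "CONFIRMATION_MODE_SYNCHRONOUS"
        }
      ]
    }
  ]
}
```

Shard 1:

```json
{
  "metadata": {
    "processing_instruction": "PROCESS_AS_COMPLETE",
    "shard_number": 1,
    "total_shards": 3,
    "nonce": 111111,
    "generation_timestamp": 1524606581
  },
  "service_availability": [
    {
      "availability": [
        {
          "spots_total": 1,
          "spots_open": 1,
          "duration_sec": 3600,
          "service_id": "1000",
          "start_sec": 1577620800,
          "merchant_id": "merchant2",
          "confirmation_mode": "CONFIRMATION_MODE_SYNCHRONOUS"
        }
      ]
    }
  ]
}
```

Shard 2:

```json
{
  "metadata": {
    "processing_instruction": "PROCESS_AS_COMPLETE",
    "shard_number": 2,
    "total_shards": 3,
    "nonce": 111111,
    "generation_timestamp": 1524606581
  },
  "service_availability": [
    {
      "availability": [
        {
          "spots_total": 1,
          "spots_open": 1,
          "duration_sec": 3600,
          "service_id": "1000",
          "start_sec": 1576670400,
          "merchant_id": "merchant3",
          "confirmation_mode": "CONFIRMATION_MODE_SYNCHRONOUS"
        }
      ]
    }
  ]
}
```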
Using sharding for partner-distributed inventory
It can be challenging for partners to consolidate inventory distributed across multiple systems and/or regions into a single feed. Sharding can resolve these reconciliation challenges by matching each shard to one distributed system's inventory set.
For example, say a partner's inventory is separated into two regions (US and EU inventory), which live in two separate systems. The partner can break each feed into two files (or shards):

- Merchants feed: 1 shard for the US, 1 shard for the EU
- Services feed: 1 shard for the US, 1 shard for the EU
- Availability feed: 1 shard for the US, 1 shard for the EU
Follow these steps to make sure the feeds are processed correctly:

1. Decide on an upload schedule, and configure each inventory instance to follow that schedule.
2. Assign unique shard numbers to each instance (for example, US = N, EU = N + 1). Set total_shards to the total number of shards.
3. At each scheduled upload time, decide on a generation_timestamp and nonce. In the FeedMetadata, set all instances to hold the same values for these two fields.
   - generation_timestamp should be current or in the recent past (ideally, the partner's read-at database timestamp).
4. After all shards are uploaded, Google groups them by generation_timestamp and nonce.

Note: Feeds or shards arriving separately at different times are supported, but a coordinated schedule is best. Feed processing occurs only after all shards in a feed set are uploaded.
Google processes the feed as a single feed, even though each shard represents a different region of the partner's inventory and may be uploaded at a different time of day, as long as generation_timestamp is the same across all shards.
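One way independent regional systems can agree on identical values without talking to each other is to derive them from the agreed schedule slot. This is an illustrative scheme only, not part of the feed spec: slot_metadata and SCHEDULE_INTERVAL_SEC are hypothetical, and in practice any mechanism that yields the same nonce and generation_timestamp across all instances works.

```python
# Agreed upload schedule: hourly. Each regional system rounds "now"
# down to the start of the current slot, so both systems compute the
# same nonce and generation_timestamp independently.
SCHEDULE_INTERVAL_SEC = 3600

def slot_metadata(now_sec, region_shard_number, total_shards):
    slot_start = now_sec - (now_sec % SCHEDULE_INTERVAL_SEC)
    return {
        "processing_instruction": "PROCESS_AS_COMPLETE",
        "shard_number": region_shard_number,  # e.g. US = 0, EU = 1
        "total_shards": total_shards,
        "nonce": slot_start,                  # same value in every region
        "generation_timestamp": slot_start,
    }

# The two systems read their clocks minutes apart but land in the
# same slot, so Google groups their shards into one feed.
us = slot_metadata(1524606581, region_shard_number=0, total_shards=2)
eu = slot_metadata(1524606999, region_shard_number=1, total_shards=2)
```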
Sharded availability feed example by region

Shard 0 - US inventory:

```json
{
  "metadata": {
    "processing_instruction": "PROCESS_AS_COMPLETE",
    "shard_number": 0,
    "total_shards": 2,
    "nonce": 111111,
    "generation_timestamp": 1524606581
  },
  "service_availability": [
    {
      "availability": [
        {
          "spots_total": 1,
          "spots_open": 1,
          "duration_sec": 3600,
          "service_id": "1000",
          "start_sec": 1577275200,
          "merchant_id": "US_merchant_1",
          "confirmation_mode": "CONFIRMATION_MODE_SYNCHRONOUS"
        }
      ]
    }
  ]
}
```

Shard 1 - EU inventory:

```json
{
  "metadata": {
    "processing_instruction": "PROCESS_AS_COMPLETE",
    "shard_number": 1,
    "total_shards": 2,
    "nonce": 111111,
    "generation_timestamp": 1524606581
  },
  "service_availability": [
    {
      "availability": [
        {
          "spots_total": 1,
          "spots_open": 1,
          "duration_sec": 3600,
          "service_id": "1000",
          "start_sec": 1577620800,
          "merchant_id": "EU_merchant_1",
          "confirmation_mode": "CONFIRMATION_MODE_SYNCHRONOUS"
        }
      ]
    }
  ]
}
```

Last updated 2025-07-26 UTC.