Shard feed files
Depending on your inventory, sharding (or breaking up feeds into multiple files) may be necessary.
When to use sharding
The feed exceeds 200 MB for 1 file (after gzip compression).
Example: The generated availability feed is 1 GB. It should be sharded into 5 or more separate files (or shards).
Partner inventory is distributed across systems or regions, making it difficult to reconcile the inventory.
Example: The partner has US and EU inventory that lives in separate systems. The feed can be generated as 2 files (or shards), 1 for the US and 1 for the EU, with the same nonce and generation_timestamp, as sketched below.
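For instance, the metadata of those two shards might look like the following, differing only in shard_number (all values here are illustrative placeholders):

US shard:

```json
{
  "metadata": {
    "processing_instruction": "PROCESS_AS_COMPLETE",
    "shard_number": 0,
    "total_shards": 2,
    "nonce": 111111,
    "generation_timestamp": 1524606581
  }
}
```

EU shard:

```json
{
  "metadata": {
    "processing_instruction": "PROCESS_AS_COMPLETE",
    "shard_number": 1,
    "total_shards": 2,
    "nonce": 111111,
    "generation_timestamp": 1524606581
  }
}
```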
General rules
Each shard cannot exceed 200 MB for 1 file (after gzip compression).
We recommend no more than 20 shards per feed. If you have a business justification that requires more than that amount, contact support for further instruction.
Individual records (for example, one Merchant object) must be sent in a single shard and cannot be split across multiple shards. However, they don't have to be sent in the shard with the same shard_number in future feeds.
For better performance, split your data evenly among the shards so that all sharded files are similar in size.
How to shard feeds
For each file (or shard), set the FeedMetadata to the following:
processing_instruction set to PROCESS_AS_COMPLETE.
shard_number set to the current shard of the feed (from 0 to total_shards - 1, without discontinuities).
total_shards set to the total number of shards for the feed (starting from 1).
nonce set to a unique identifier that is the same across all shards of the same feed but different from the value of other feeds. nonce must be a positive int (uint64).
generation_timestamp set to the timestamp in Unix epoch format. It must be the same across all shards of the feed.
Recommended: For each file (or shard), set the filename to indicate the feed type, the timestamp, the shard number, and the total number of shards. Shards should be roughly equal in size and are processed once all shards are uploaded.
Example: availability_feed_1574117613_001_of_002.json.gz
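For illustration, the first shard of a three-shard availability feed could look like the following. Shards 1 and 2 would carry the same nonce and generation_timestamp, their own shard_number, and their own availability records (all values here are example placeholders):

```json
{
  "metadata": {
    "processing_instruction": "PROCESS_AS_COMPLETE",
    "shard_number": 0,
    "total_shards": 3,
    "nonce": 111111,
    "generation_timestamp": 1524606581
  },
  "service_availability": [
    {
      "availability": [
        {
          "spots_total": 1,
          "spots_open": 1,
          "duration_sec": 3600,
          "service_id": "1000",
          "start_sec": 1577275200,
          "merchant_id": "merchant1",
          "confirmation_mode": "CONFIRMATION_MODE_SYNCHRONOUS"
        }
      ]
    }
  ]
}
```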
Use sharding for partner distributed inventory
It can be challenging for partners to consolidate inventory distributed across multiple systems or regions into a single feed. Sharding can be used to resolve these reconciliation challenges by setting each shard to match each distributed system's inventory set.
For example, say a partner's inventory is separated into 2 regions (US and EU inventory), which live in 2 separate systems.
The partner can break each feed into 2 files (or shards):
Merchants feed: 1 shard for the US, 1 shard for the EU
Services feed: 1 shard for the US, 1 shard for the EU
Availability feed: 1 shard for the US, 1 shard for the EU
Follow the steps below to ensure the feeds are properly processed:
Decide on an upload schedule, and configure each inventory instance to follow it.
Assign unique shard numbers to each instance (for example, US = N, EU = N + 1).
Set total_shards to the total number of shards.
At each scheduled upload time, decide on a generation_timestamp and a nonce. In the FeedMetadata, set all instances to hold the same values for these two fields.
generation_timestamp should be current or in the recent past (ideally, the partner's read-at database timestamp).
After all shards are uploaded, Google groups them by generation_timestamp and nonce.
Google will process the feed as one even though each shard represents a different region of the partner's inventory and could be uploaded at a different time of day, as long as generation_timestamp is the same across all shards.
Sharded availability feed example by region
[null,null,["Última actualización: 2025-07-26 (UTC)"],[[["\u003cp\u003eSharding, or splitting feeds into multiple files, is recommended when a single feed file exceeds 200 MB after gzip compression or when inventory is distributed across different systems.\u003c/p\u003e\n"],["\u003cp\u003eEach shard should be under 200 MB after gzip compression, with a recommended maximum of 20 shards per feed.\u003c/p\u003e\n"],["\u003cp\u003eAll shards for a single feed must use the same \u003ccode\u003enonce\u003c/code\u003e and \u003ccode\u003egeneration_timestamp\u003c/code\u003e in their metadata for proper processing.\u003c/p\u003e\n"],["\u003cp\u003eWhen sharding, individual records must be kept within a single shard and cannot be split across multiple shards.\u003c/p\u003e\n"],["\u003cp\u003eFor distributed inventory, sharding can be used to represent different regions or systems by assigning unique shard numbers while maintaining consistent \u003ccode\u003enonce\u003c/code\u003e and \u003ccode\u003egeneration_timestamp\u003c/code\u003e.\u003c/p\u003e\n"]]],["Sharding divides large or geographically diverse feeds into multiple files. Use it when a feed exceeds 200MB after compression or when inventory is in separate systems/regions. Each shard must be under 200MB, with no more than 20 shards per feed. Assign each shard a unique `shard_number` and a common `nonce` and `generation_timestamp`. Shards are processed after the last file is uploaded. Ensure even data distribution across shards for performance, and keep individual records within a single shard.\n"],null,["# Shard feed files\n\nDepending on your inventory, sharding (or breaking up feeds into multiple\nfiles) may be necessary.\n| **Note:** Sharding might only be applicable to some of the feeds you submit and is dependent on the type of inventory submitted. Please reach out to your Google contact if you are unsure of the best approach.\n\nWhen to use sharding\n--------------------\n\n- Feed exceeds 200 MB for 1 file (after gzip compression).\n\n - **Example:** Generated availability feed is 1 GB. This should be sharded to 5+ separate files (or shards).\n- Partner inventory is distributed across systems and/or regions\n resulting in difficulty reconciling the inventory.\n\n - **Example:** Partner has US and EU inventory that live in separate systems. The feed may be generated with 2 files (or shards), 1 for US, and 1 for EU with the same `nonce` and `generation_timestamp`.\n\n| **Note:** Before using sharding, make sure you are [compressing your feed uploads with gzip](/actions-center/verticals/local-services/e2e/reference/tutorials/compression). Using gzip can reduce feed size by 10x or more, and may allow you to skip or defer sharding your feed.\n\nGeneral rules\n-------------\n\n- Each shard cannot exceed 200 MB for 1 file (after gzip compression).\n- We recommend no more than 20 shards per feed. If you have a business justification that requires more than that amount, please contact support for further instruction.\n- Individual records (one `Merchant` object for example) must be sent in one shard, they cannot be split across multiple shards. However, they don't have to be sent in the shard with the same `shard_number` for future feeds.\n- For better performance, your data should be split evenly among the shards so that all sharded files are similar in size.\n\n| **Note:** Google processes feed files as soon as they're uploaded to the SFTP server. 
If the feed is sharded into multiple files, the process begins after you upload the last file. If your feed contains errors, you receive an email with the [feed error codes](/actions-center/verticals/local-services/e2e/reference/feeds/feed-errors).\n\nHow to shard feeds\n------------------\n\nFor each file (or shard), set the `FeedMetadata` to the\nfollowing:\n\n- `processing_instruction`set to `PROCESS_AS_COMPLETE`.\n- `shard_number` set to to the current shard of the feed (starting from 0 to `total_shards` - 1 without discontinuities)\n- `total_shards` set to the total number of shards for the feed (starting from 1).\n- `nonce` set to a unique identifier that is **the same** across all shards of **the same** feed but different from the value of other feeds. `nonce` must be a positive int (`uint64`).\n- `generation_timestamp` is the timestamp in unix and EPOCH format. This should be **the same** across all shards of the feed.\n\n*Recommended:* For each file (or shard), set the filename to indicate\nthe feed type, the timestamp, the shard number, and the total number of\nshards. Shards should be roughly equal in size and are processed once all\nshards are uploaded.\n\n- `Example:` \"availability_feed_1574117613_001_of_002.json.gz\"\n\n**Sharded Availability feed example** \n\n### Shard 0\n\n```scdoc\n{\n \"metadata\": {\n \"processing_instruction\": \"PROCESS_AS_COMPLETE\",\n \"shard_number\": 0,\n \"total_shards\": 3,\n \"nonce\": 111111,\n \"generation_timestamp\": 1524606581\n },\n \"service_availability\": [\n {\n \"availability\": [\n {\n \"spots_total\": 1,\n \"spots_open\": 1,\n \"duration_sec\": 3600,\n \"service_id\": \"1000\",\n \"start_sec\": 1577275200,\n \"merchant_id\": \"merchant1\",\n \"confirmation_mode\": \"CONFIRMATION_MODE_SYNCHRONOUS\"\n }\n ]\n }\n ]\n}\n```\n\n### Shard 1\n\n```scdoc\n{\n \"metadata\": {\n \"processing_instruction\": \"PROCESS_AS_COMPLETE\",\n \"shard_number\": 1,\n \"total_shards\": 3,\n \"nonce\": 111111,\n \"generation_timestamp\": 1524606581\n },\n \"service_availability\": [\n {\n \"availability\": [\n {\n \"spots_total\": 1,\n \"spots_open\": 1,\n \"duration_sec\": 3600,\n \"service_id\": \"1000\",\n \"start_sec\": 1577620800,\n \"merchant_id\": \"merchant2\",\n \"confirmation_mode\": \"CONFIRMATION_MODE_SYNCHRONOUS\"\n }\n ]\n }\n ]\n}\n```\n\n### Shard 2\n\n```scdoc\n{\n \"metadata\": {\n \"processing_instruction\": \"PROCESS_AS_COMPLETE\",\n \"shard_number\": 2,\n \"total_shards\": 3,\n \"nonce\": 111111,\n \"generation_timestamp\": 1524606581\n },\n \"service_availability\": [\n {\n \"availability\": [\n {\n \"spots_total\": 1,\n \"spots_open\": 1,\n \"duration_sec\": 3600,\n \"service_id\": \"1000\",\n \"start_sec\": 1576670400,\n \"merchant_id\": \"merchant3\",\n \"confirmation_mode\": \"CONFIRMATION_MODE_SYNCHRONOUS\"\n }\n ]\n }\n ]\n}\n```\n\nUsing sharding for partner distributed inventory\n------------------------------------------------\n\nIt can be challenging for partners to consolidate inventory distributed\nacross multiple systems and or regions into a single feed. 
Sharding can be\nused to resolve reconciliation challenges by setting each shard to match each\ndistributed system's inventory set.\n\nFor example, say a partner's inventory is separated into 2 regions (US and EU\ninventory), which live in 2 separate systems.\n\nThe partner can break each feed into 2 files (or shards):\n\n- Merchants feed: 1 shard for US, 1 shard for EU\n- Services feed: 1 shard for US, 1 shard for EU\n- Availability feed: 1 shard for US, 1 shard for EU\n\nFollow the steps below to ensure the feeds are properly processed:\n\n1. Decide on an upload schedule, and configure each instance of inventory to follow the schedule.\n2. Assign unique shard numbers for each instance (e.g. US = N, EU = N + 1). Set `total_shards` to the total number of shards.\n3. At each scheduled upload time, decide on a `generation_timestamp` and `nonce`. In the `FeedMetadata`, set all instances to hold the same values for these two fields.\n - `generation_timestamp` should be current or recent past (ideally, the partner's read-at database timestamp)\n4. After all shards are uploaded, Google groups the shards via `generation_timestamp` and `nonce`.\n\n| **Note:** Feeds/shards arriving separately at different times is supported, but coordinated schedules is best. Feed processing occurs only when all shards in a feed set are uploaded.\n\nGoogle will process the feed as one even though each shard represents a\ndifferent region of the partner's inventory and could be uploaded at a\ndifferent time of the day as long as the `generation_timestamp`\nis the same across all shards.\n\n**Sharded Availability feed example by region** \n\n### Shard 0 - US Inventory\n\n```scdoc\n{\n \"metadata\": {\n \"processing_instruction\": \"PROCESS_AS_COMPLETE\",\n \"shard_number\": 0,\n \"total_shards\": 2,\n \"nonce\": 111111,\n \"generation_timestamp\": 1524606581\n },\n \"service_availability\": [\n {\n \"availability\": [\n {\n \"spots_total\": 1,\n \"spots_open\": 1,\n \"duration_sec\": 3600,\n \"service_id\": \"1000\",\n \"start_sec\": 1577275200,\n \"merchant_id\": \"US_merchant_1\",\n \"confirmation_mode\": \"CONFIRMATION_MODE_SYNCHRONOUS\"\n }\n ]\n }\n ]\n}\n```\n\n### Shard 1 - EU Inventory\n\n```scdoc\n{\n \"metadata\": {\n \"processing_instruction\": \"PROCESS_AS_COMPLETE\",\n \"shard_number\": 1,\n \"total_shards\": 2,\n \"nonce\": 111111,\n \"generation_timestamp\": 1524606581\n },\n \"service_availability\": [\n {\n \"availability\": [\n {\n \"spots_total\": 1,\n \"spots_open\": 1,\n \"duration_sec\": 3600,\n \"service_id\": \"1000\",\n \"start_sec\": 1577620800,\n \"merchant_id\": \"EU_merchant_1\",\n \"confirmation_mode\": \"CONFIRMATION_MODE_SYNCHRONOUS\"\n }\n ]\n }\n ]\n}\n```"]]