Verifying Googlebot and other Google crawlers

You can verify whether a web crawler accessing your server really is a Google crawler, such as Googlebot. This is useful if you're concerned that spammers or other troublemakers are accessing your site while claiming to be Googlebot.
Google's crawlers fall into three categories:

| Type | Description | Reverse DNS mask | IP ranges |
|---|---|---|---|
| Common crawlers | The common crawlers used for Google's products (such as Googlebot). They always respect robots.txt rules for automatic crawls. | `crawl-***-***-***-***.googlebot.com` or `geo-crawl-***-***-***-***.geo.googlebot.com` | googlebot.json |
| Special-case crawlers | Crawlers that perform specific functions for Google products (such as AdsBot), where there's an agreement between the crawled site and the product about the crawl process. These crawlers may or may not respect robots.txt rules. | `rate-limited-proxy-***-***-***-***.google.com` | special-crawlers.json |
| User-triggered fetchers | Tools and product functions where the end user triggers a fetch. For example, Google Site Verifier acts on the request of a user. Because the fetch was requested by a user, these fetchers ignore robots.txt rules. Fetchers controlled by Google originate from IPs in the `user-triggered-fetchers-google.json` object and resolve to a `google.com` hostname. IPs in the `user-triggered-fetchers.json` object resolve to `gae.googleusercontent.com` hostnames. These IPs are used, for example, if a site running on Google Cloud (GCP) has a feature that requires fetching external RSS feeds on the request of a user of that site. | `***-***-***-***.gae.googleusercontent.com` or `google-proxy-***-***-***-***.google.com` | `user-triggered-fetchers.json` and `user-triggered-fetchers-google.json` |
There are two methods for verifying Google's crawlers:

- Manually: for one-off lookups, use command-line tools. This method is sufficient for most use cases.
- Automatically: for large-scale lookups, use an automatic solution to match a crawler's IP address against the list of published Googlebot IP addresses.
Use command-line tools

1. Run a reverse DNS lookup on the accessing IP address from your logs, using the `host` command.
2. Verify that the domain name is `googlebot.com`, `google.com`, or `googleusercontent.com`.
3. Run a forward DNS lookup on the domain name retrieved in step 1, again using the `host` command.
4. Verify that the resulting IP address is the same as the original accessing IP address from your logs.
Example 1:

    host 66.249.66.1
    1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com.

    host crawl-66-249-66-1.googlebot.com
    crawl-66-249-66-1.googlebot.com has address 66.249.66.1

Example 2:

    host 35.247.243.240
    240.243.247.35.in-addr.arpa domain name pointer geo-crawl-35-247-243-240.geo.googlebot.com.

    host geo-crawl-35-247-243-240.geo.googlebot.com
    geo-crawl-35-247-243-240.geo.googlebot.com has address 35.247.243.240

Example 3:

    host 66.249.90.77
    77.90.249.66.in-addr.arpa domain name pointer rate-limited-proxy-66-249-90-77.google.com.

    host rate-limited-proxy-66-249-90-77.google.com
    rate-limited-proxy-66-249-90-77.google.com has address 66.249.90.77
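The four manual steps can be scripted. Below is a minimal sketch using Python's standard `socket` module; the function and constant names are ours, and error handling is intentionally simple. Note that the hostname-suffix check alone is not enough: the forward lookup and IP comparison are what defeat spoofed PTR records.

```python
import socket

# Domains from step 2; the leading dot rejects look-alikes such as
# "fakegooglebot.com".
GOOGLE_DOMAINS = (".googlebot.com", ".google.com", ".googleusercontent.com")

def hostname_is_google(hostname: str) -> bool:
    """Step 2: the PTR hostname must end in one of Google's crawler domains."""
    return hostname.rstrip(".").endswith(GOOGLE_DOMAINS)

def is_google_crawler(ip: str) -> bool:
    """Steps 1-4: reverse lookup, domain check, forward lookup, IP comparison."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)           # step 1: reverse DNS
    except OSError:
        return False
    if not hostname_is_google(hostname):                    # step 2: domain check
        return False
    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]  # step 3: forward DNS
    except OSError:
        return False
    return ip in forward_ips                                # step 4: must match
```

For example, `is_google_crawler("66.249.66.1")` should reproduce Example 1 above when run on a machine with DNS access.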
Use automatic solutions

Alternatively, you can identify Googlebot by IP address by matching the crawler's IP address against the lists of Google crawlers' and fetchers' IP ranges:

- Common crawlers like Googlebot: googlebot.json
- Special-case crawlers like AdsBot: special-crawlers.json
- User-triggered fetchers (users): user-triggered-fetchers.json
- User-triggered fetchers (Google): user-triggered-fetchers-google.json

For other Google IP addresses from which your site may be accessed (for example, Apps Script), match the accessing IP address against the general list of Google IP addresses. Note that the IP addresses in the JSON files are represented in CIDR format.
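A minimal automated check might fetch googlebot.json and test membership with Python's standard `ipaddress` module. This sketch assumes the JSON shape published at the time of writing (a top-level `"prefixes"` array whose entries carry either an `"ipv4Prefix"` or an `"ipv6Prefix"` CIDR string); the function names are ours:

```python
import ipaddress
import json
import urllib.request

# Published range list for common crawlers; the other JSON files linked
# above follow the same shape.
GOOGLEBOT_RANGES_URL = (
    "https://developers.google.com/static/search/apis/ipranges/googlebot.json"
)

def load_networks(url: str = GOOGLEBOT_RANGES_URL) -> list:
    """Fetch the JSON file and parse every CIDR prefix into a network object."""
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return [
        ipaddress.ip_network(p.get("ipv4Prefix") or p.get("ipv6Prefix"))
        for p in data["prefixes"]
    ]

def is_googlebot_ip(ip: str, networks: list) -> bool:
    """True if the address falls inside any published range."""
    addr = ipaddress.ip_address(ip)
    # Membership tests across mismatched IP versions simply return False.
    return any(addr in net for net in networks)
```

In practice you would cache the downloaded list and refresh it periodically, since Google updates these files as its ranges change.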
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated (UTC): 2025-08-04.