Improved handling of URLs with parameters

Friday, July 22, 2011

You may have noticed that the Parameter Handling feature disappeared from the Site configuration > Settings section of Webmaster Tools. Fear not; you can now find it under its new name, URL Parameters! Along with renaming it, we refreshed and improved the feature, and we hope you'll find it even more useful. Any URL parameter configuration made in the old version of the feature will automatically be visible in the new version. Before we reveal all the cool things you can do with URL parameters now, let us remind you of (or, if you're new to this feature, introduce you to) its purpose and when it may come in handy.

When to use

URL Parameters helps you control which URLs on your site should be crawled by Googlebot, depending on the parameters that appear in these URLs. This functionality provides a simple way to prevent Googlebot from crawling duplicate content on your site. As a result, your site can be crawled more effectively, reducing your bandwidth usage and likely allowing more unique content from your site to be indexed. If you suspect that Googlebot's crawl coverage of the content on your site could be improved, using this feature can be a good idea. But with great power comes great responsibility! You should only use this feature if you're sure about the behavior of URL parameters on your site. Otherwise you might mistakenly prevent some URLs from being crawled, making their content inaccessible to Googlebot.

Parameter view for 'page' within the Webmaster Tools URL Parameter tool

A lot more to do

Okay, let's talk about what's new and improved. To begin with, in addition to assigning a crawl action to an individual parameter, you can now also describe the behavior of the parameter. You start by telling us whether or not the parameter changes the content of the page. If the parameter doesn't affect the page's content, then your work is done: Googlebot will choose a representative value of this parameter and will crawl only URLs carrying that value. Since the parameter doesn't change the content, any value chosen is equally good. However, if the parameter does change the content of a page, you can now assign one of four possible ways for Google to crawl URLs with this parameter (a small illustrative sketch follows the list):

  • Let Googlebot decide
  • Every URL
  • Only URLs with value=x
  • No URLs
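
To make the two cases more concrete, here is a minimal sketch in Python. It is purely illustrative: the representative() helper and the "sessionid" parameter are our own hypothetical names, not part of Webmaster Tools; the sketch only shows how URLs that differ in a parameter that doesn't change the page's content can collapse to a single representative URL.

    # Purely illustrative sketch -- representative() and "sessionid" are our own
    # hypothetical names, not part of Webmaster Tools or Googlebot.
    from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

    def representative(url, neutral_params, rep_values):
        """Rewrite content-neutral parameters to one fixed, representative value."""
        parts = urlsplit(url)
        query = []
        for name, value in parse_qsl(parts.query):
            if name in neutral_params:
                value = rep_values.setdefault(name, value)  # first value seen wins
            query.append((name, value))
        return urlunsplit(parts._replace(query=urlencode(query)))

    # Hypothetical example: "sessionid" is declared as not affecting content.
    reps = {}
    urls = [
        "http://www.example.com/dresses/?sessionid=111",
        "http://www.example.com/dresses/?sessionid=222",
    ]
    # Both URLs map to the same representative URL, so crawling one of them is enough.
    print({representative(u, {"sessionid"}, reps) for u in urls})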

We also added the ability to provide your own specific value to be used with the "Only URLs with value=x" option; you're no longer restricted to the list of values that we provide. Optionally, you can also tell us exactly what the parameter does: whether it sorts, paginates, determines content, and so on. One last improvement: for every parameter, we'll try to show you a sample of URLs from your site that Googlebot crawled which contain that particular parameter.

Of the four crawl options listed above, "No URLs" is new and deserves special attention. This option is the most restrictive and, for any given URL, takes precedence over the settings of other parameters in that URL. This means that if the URL contains a parameter that is set to the "No URLs" option, this URL will never be crawled, even if other parameters in the URL are set to "Every URL." You should be careful when using this option. The second most restrictive setting is "Only URLs with value=x."
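
A small sketch may help picture this precedence. Again, this is only an illustration: the SETTINGS structure and the should_crawl() helper are hypothetical (the parameter names anticipate the store example below), not an actual API.

    # Purely illustrative sketch -- SETTINGS and should_crawl() are hypothetical.
    from urllib.parse import urlsplit, parse_qsl

    SETTINGS = {
        "filterByColor": ("NO_URLS", None),            # most restrictive option
        "sortOrder":     ("ONLY_VALUE", "lowToHigh"),  # second most restrictive
        "itemId":        ("EVERY_URL", None),
    }

    def should_crawl(url):
        """Decide whether a URL may be crawled under the per-parameter settings."""
        for name, value in parse_qsl(urlsplit(url).query):
            action, wanted = SETTINGS.get(name, ("LET_GOOGLEBOT_DECIDE", None))
            if action == "NO_URLS":
                return False  # overrides every other parameter, even "Every URL"
            if action == "ONLY_VALUE" and value != wanted:
                return False  # wrong value for an "Only URLs with value=x" parameter
        return True           # all other options let the URL through in this sketch

    # filterByColor is set to "No URLs", so the first URL is skipped even though
    # itemId is set to "Every URL".
    print(should_crawl("http://fairyclothes.example.com/skirts/?itemId=7&filterByColor=red"))    # False
    print(should_crawl("http://fairyclothes.example.com/skirts/?itemId=7&sortOrder=lowToHigh"))  # True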

Feature in use

Now let's do something fun and exercise our brains on an example.

Once upon a time there was an online store, fairyclothes.example.com. The store's website used parameters in its URLs, and the same content could be reached through multiple URLs. One day the store owner noticed that too many redundant URLs might be preventing Googlebot from crawling the site thoroughly, so he sent his assistant, the CuriousQuestionAsker, to the Great WebWizard to get advice on using the URL Parameters feature to reduce the duplicate content crawled by Googlebot. The Great WebWizard was famous for their wisdom. They looked at the URL parameters and proposed the configuration shown in the following table (spelled out in a short sketch after it):

Parameter name   Effect on content?   What should Googlebot crawl?
trackingId       None                 One representative URL
sortOrder        Sorts                Only URLs with value='lowToHigh'
sortBy           Sorts                Only URLs with value='price'
filterByColor    Narrows              No URLs
itemId           Specifies            Every URL
page             Paginates            Every URL
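
For readers following along with the earlier should_crawl() sketch, the wizard's table can be written as the same kind of hypothetical configuration (again, this structure is our own illustration, not an export format of the tool):

    # The wizard's table expressed in the hypothetical format used by the
    # should_crawl() sketch earlier in this post.
    WIZARD_SETTINGS = {
        "trackingId":    ("REPRESENTATIVE", None),      # doesn't change content
        "sortOrder":     ("ONLY_VALUE", "lowToHigh"),
        "sortBy":        ("ONLY_VALUE", "price"),
        "filterByColor": ("NO_URLS", None),
        "itemId":        ("EVERY_URL", None),
        "page":          ("EVERY_URL", None),
    }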

The CuriousQuestionAsker couldn't escape their nature and started asking questions:

CuriousQuestionAsker: You've instructed Googlebot to choose a representative URL for trackingId (value to be chosen by Googlebot). Why not select the "Only URLs with value=x" option and choose the value myself?
Great WebWizard: While crawling the web, Googlebot encountered the following URLs that link to your site:

  1. fairyclothes.example.com/skirts/?trackingId=aaa123
  2. fairyclothes.example.com/skirts/?trackingId=aaa124
  3. fairyclothes.example.com/trousers/?trackingId=aaa125

Imagine that you were to tell Googlebot to only crawl URLs where trackingId=aaa125. In that case Googlebot would not crawl URLs 1 and 2, as neither of them has the value aaa125 for trackingId. Their content would neither be crawled nor indexed, and none of your inventory of fine skirts would show up in Google's search results. No, for this case choosing a representative URL is the way to go. Why? Because that tells Googlebot that when it encounters two URLs on the web that differ only in this parameter (as URLs 1 and 2 above do), it only needs to crawl one of them (either will do) and it will still get all the content. In the example above, two URLs will be crawled: either 1 and 3, or 2 and 3. Not a single skirt or pair of trousers will be lost.
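
The wizard's argument can also be checked with the illustrative representative() sketch from earlier (reusing that hypothetical helper), treating trackingId as a parameter that doesn't change content; the http:// scheme is added only so the URLs parse:

    # Applying the earlier, hypothetical representative() sketch: URLs 1 and 2
    # collapse into one representative URL, while URL 3 stays separate.
    reps = {}
    crawlable = {representative(u, {"trackingId"}, reps) for u in [
        "http://fairyclothes.example.com/skirts/?trackingId=aaa123",
        "http://fairyclothes.example.com/skirts/?trackingId=aaa124",
        "http://fairyclothes.example.com/trousers/?trackingId=aaa125",
    ]}
    print(crawlable)  # two URLs: one for the skirts page, one for the trousers page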

CuriousQuestionAsker: What about the sortOrder parameter? I don't care if the items are listed in ascending or descending order. Why not let Google select a representative value?
Great WebWizard: As Googlebot continues to crawl, it may find the following URLs:

  1. fairyclothes.example.com/skirts/