Disallowing URL Parameters vs. Canonicalizing
-
Hi all,
I have a client with a unique search setup. They have Region pages (/state/city); we want these indexed and are using self-referential canonicals on them.
They also have a search function that emulates the look of the Region pages. When you search for, say, Los Angeles, the URL changes to /search/los+angeles and looks exactly like /ca/los-angeles.
These search URLs can also have parameters (/search/los+angeles?age=over-2&time[]=part-time), which we obviously don't want indexed.
Right now my concern is how best to ensure the /search pages don't get indexed and we don't get hit with duplicate content penalties. The options are these:
-
Self-referential canonicals for the Region pages, and disallow everything after the second slash in /search/ so the main search page stays indexed (a sketch of that robots.txt rule is below).
-
Self-referential canonicals for the Region pages, and write a rule that automatically canonicalizes all other search pages to /search.
Potential Concern: /search/ URLs are created even with misspellings.
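For reference, the disallow rule in option 1 might look something like this (just a sketch, assuming the main search page lives at exactly /search/ and relying on Google's support for Allow and the $ end-of-URL anchor):

```
User-agent: *
# Keep the main search page itself crawlable
Allow: /search/$
# Block everything after the second slash: /search/los+angeles, /search/los+angeles?age=..., etc.
Disallow: /search/
```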
Thanks!
-
-
Just so you know, meta noindex can be applied through the HTML but also through the HTTP header, which might make it easier to implement on such a highly generated website.
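For example, if there's a Node/Express layer in front of the site, a sketch like this could attach the header to every generated search URL (the path check and route shapes are assumptions about how their URLs are built):

```typescript
import express from "express";

const app = express();

// Add "X-Robots-Tag: noindex" to every generated search URL:
// anything under /search/, or /search itself when it carries parameters.
app.use((req, res, next) => {
  const isGeneratedSearch =
    /^\/search\/.+/.test(req.path) ||
    (req.path === "/search" && Object.keys(req.query).length > 0);

  if (isGeneratedSearch) {
    res.setHeader("X-Robots-Tag", "noindex");
  }
  next();
});
```

The nice thing about the header version is that it behaves the same whether the page body is server-rendered or built client-side by JavaScript.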
-
Yeah, I know the difference between the two, I've just been in a situation where canonicals were recommended as a means of controlling the preferred page within an indexation context. If that makes sense.
My biggest concern is with the creation of URLs from misspellings, which still return search results if it's close enough. The redirects could work. Honestly that wasn't something we considered.
I'm liking the noindex approach. They'd have to write a rule that applies it to every page created with a search parameter, which I think they should be able to do.
If it helps, almost the entire site is run by JavaScript. Like...everything.
Thanks for the advice. Much appreciated.
-Brad
-
Robots.txt controls crawling, not indexation. Google will still sometimes index pages it cannot crawl. Canonical tags are for duplicate content consolidation, but they are not a hard signal and Google frequently ignores them. Meta noindex tags (or X-Robots-Tag noindex through the HTTP header, if you cannot apply the meta noindex in the HTML) are a harder signal and are meant to help you control indexation.
To be honest, if the pages are identical, why not just 301 redirect the relevant searches (the top-line ones, which result in pages exactly the same as your regional ones) to the regional URLs? If the pages really are the same, it won't be any different for users except for a small delay during the redirect (which won't really be felt, especially if you are using Nginx redirects).
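If the search terms can be mapped back to their Region pages, the redirect itself is trivial. A sketch (the lookup table and URL shapes here are purely hypothetical):

```typescript
import express from "express";

const app = express();

// Hypothetical lookup from normalised search terms to Region URLs.
const regionBySearchTerm: Record<string, string> = {
  "los angeles": "/ca/los-angeles",
  "san diego": "/ca/san-diego",
};

app.get("/search/:term", (req, res, next) => {
  const term = req.params.term.replace(/\+/g, " ").toLowerCase();
  const regionUrl = regionBySearchTerm[term];

  if (regionUrl) {
    res.redirect(301, regionUrl); // permanent redirect to the Region page
    return;
  }
  next(); // not a known region, fall through to the normal search handler
});
```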
If you can't do that, you're really left with the meta noindex tag and the canonical tag. Canonical tags avoid content duplication penalties, but they're a softer signal and they don't consolidate link equity like 301 redirects do (so in many ways, there's not actually that much difference between meta noindex and canonicals, except canonical tags are more complex to set up in the first place as they require a destination field).
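For completeness, the canonical route (option 2 in the original question) just means every generated search URL carries something like this in its head (the domain here is hypothetical):

```html
<!-- on /search/los+angeles?age=over-2&time[]=part-time -->
<link rel="canonical" href="https://www.example.com/search" />
```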
I'd probably just meta noindex all the search URLs. Once Google has swallowed that, I would then (after 2-3 weeks) apply the relevant robots.txt rules.
If you apply them both at the same time, Google won't be able to crawl the search URLs (since your robots.txt rule will block them) and will therefore be blind to your canonical / meta noindex directive(s). So you have to handle de-indexation first, and THEN block crawling to save your crawl allowance a bit.
But don't do it all at once or you'll get in an unholy mess!
-
Hi there
Canonical tags prevent problems caused by identical or "duplicate" content across multiple URLs. So in this instance, implement the disallow rule on all of the URLs containing /search/.