Blocking Dynamic URLs with Robots.txt
-
Background:
My e-commerce site uses a lot of layered navigation and sorting links. While this is great for users, it results in many URL variations of the same page being crawled by Google. For example, a standard category page:
http://www.mysite.com/widgets.html
...which uses a "Price" layered navigation sidebar to filter products by price also produces the following URLs, all of which load the same page (%2C is a URL-encoded comma):
http://www.mysite.com/widgets.html?price=1%2C250
http://www.mysite.com/widgets.html?price=2%2C250
http://www.mysite.com/widgets.html?price=3%2C250
There are literally thousands of these URL variations being indexed, so I'd like to use robots.txt to disallow them.
Question:
-
Is this a wise thing to do? Or does Google account for layered navigation links by default, meaning I don't need to worry?
-
To implement this, I was going to add the following to robots.txt:
User-agent: *
Disallow: /*?
Disallow: /*=
...which would prevent any dynamic URL containing a '?' or '=' from being crawled. Is there a better way to do this, or is this a good solution?
Thank you!
-
If you are happy with any URLs containing query strings not being indexed, your robots.txt will work fine.
Do any of your URLs with question marks in them have links pointing to them? If so, you might want to be careful about blocking Google from crawling them; I would think you'd lose the benefit those links would pass to your site.
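If you'd rather not block every query string site-wide, a narrower rule set can target only the filter parameters you know about. A sketch, assuming "price" is the only layered-navigation parameter in play (Googlebot supports the * wildcard used here):
User-agent: *
Disallow: /*?price=
Disallow: /*&price=
The trade-off is that you'd need a rule for each filter parameter, but other dynamic URLs would stay crawlable.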
-
Tait,
Thanks for the answer. I think the canonical tag would be ideal, but in terms of implementation it would require substantial modification to the site's PHP code: I have a lot of categories, and adding the tag manually to each one would be very time-consuming.
Would preventing the spiders from crawling any URLs with a '?' or '&' (which would only be dynamic URL variations) cause any problems? Or is this just not ideal best practice?
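For reference, if the categories all render through one shared header template, the change can be a single snippet rather than a per-category edit. A minimal sketch in PHP, assuming (since the actual site code isn't shown here) that the current URL with its query string stripped is always the preferred version:
<?php
// Minimal sketch (an assumption, not the site's actual code): emit a
// canonical tag pointing at the current page minus its query string, so
// every filtered variation (?price=..., sort orders, etc.) points back
// to the plain category page. Runs once in the shared header template.
$path      = strtok($_SERVER['REQUEST_URI'], '?'); // path without the query string
$canonical = 'http://' . $_SERVER['HTTP_HOST'] . $path;
echo '<link rel="canonical" href="'
   . htmlspecialchars($canonical, ENT_QUOTES) . '" />';
?>
With that in place, http://www.mysite.com/widgets.html?price=1%2C250 would declare http://www.mysite.com/widgets.html as its canonical URL.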
Thanks!
-
I don't know if there's a good solution with robots.txt given your URL structure. However, you could use the rel=canonical link tag in the page's <head> to tell Google to treat many of your URLs as the same page. This would help you avoid duplicate content penalties.
More on rel=canonical:
http://www.google.com/support/webmasters/bin/answer.py?answer=139394
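Using the category page from the question, each filtered variation would then carry a single line in its <head>, pointing back at the base URL:
<link rel="canonical" href="http://www.mysite.com/widgets.html" />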