Does It Really Matter to Restrict Dynamic URLs by Robots.txt?
-
Today, I was checking Google Webmaster Tools and found that 117 dynamic URLs are restricted by robots.txt. I have added the following syntax to my robots.txt; you can get a better idea from the attached Excel sheet.
#Dynamic URLs
Disallow: /?osCsid
Disallow: /?q=
Disallow: /?dir=
Disallow: /?p=
Disallow: /*?limit=
Disallow: /*review-form
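A note on pattern matching: a rule like Disallow: /?q= only matches URLs where the parameter appears directly after the domain root (e.g. /?q=lamp), whereas Disallow: /*?q= matches the parameter on any path. Below is a minimal sketch of the same rules written with wildcards, assuming the intent is to block these parameters site-wide:
#Dynamic URLs (wildcarded sketch; assumes site-wide blocking is intended)
Disallow: /*?osCsid
Disallow: /*?q=
Disallow: /*?dir=
Disallow: /*?p=
Disallow: /*?limit=
Disallow: /*review-form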
I am concerned about the following kinds of pages.
Sorting by specification:
http://www.vistastores.com/table-lamps?dir=asc&order=name
Items per page:
http://www.vistastores.com/table-lamps?dir=asc&limit=60&order=name
Paginated product pages:
http://www.vistastores.com/table-lamps?p=2
Will this hurt the organic performance of my category pages?
-
I am quite late adding my reply to this question because I was busy fixing the dynamic URL issues.
I have made the following changes to my website:
- I have rewritten all dynamic URLs as static ones, except for session IDs and internal search, because I had already restricted both of those via robots.txt.
- I have set canonical tags on the near-duplicate pages, as Dr. Pete described in "Duplicate Content in a Post-Panda World."
I want to share one live example to explain it.
Base URL: http://www.vistastores.com/patio-umbrellas
Dynamic URLs: These were dynamic, but I have rewritten them as static URLs. A canonical tag pointing to the base URL is in place on each of the near-duplicate pages, which are as follows:
http://www.vistastores.com/patio-umbrellas/shopby/limit-100
http://www.vistastores.com/patio-umbrellas/shopby/lift-method-search-manual-lift
http://www.vistastores.com/patio-umbrellas/shopby/manufacturer-fiberbuilt-umbrellas-llc
http://www.vistastores.com/patio-umbrellas/shopby/price-2,100
http://www.vistastores.com/patio-umbrellas/shopby/canopy-fabric-search-sunbrella
http://www.vistastores.com/patio-umbrellas/shopby/canopy-shape-search-hexagonal
http://www.vistastores.com/patio-umbrellas/shopby/canopy-size-search-7-ft-to-8-ft
http://www.vistastores.com/patio-umbrellas/shopby/color-search-blue
http://www.vistastores.com/patio-umbrellas/shopby/finish-search-black
http://www.vistastores.com/patio-umbrellas/shopby/p-2
http://www.vistastores.com/patio-umbrellas/shopby/dir-desc/order-position
Now I am looking forward to seeing how Google crawls the site and how it treats all the canonicalized pages. I am quite excited to see changes in organic ranking as PageRank is distributed across the website. Thanks for your insightful reply.
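For readers following along, the canonical tag each of these near-duplicate pages carries points back to the base category URL. A minimal sketch of what that looks like in the <head> of, say, the color-search-blue page above:
<link rel="canonical" href="http://www.vistastores.com/patio-umbrellas" />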
-
Robots.txt isn't the best solution for dynamic URLs. Depending on the type of URL, there are a number of other solutions available.
1. As blurbpoint mentions, Google Webmaster Tools allows you to specify URL handling. They actually do a decent job of this automatically, but also allow you the option to change the settings yourself.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1235687
2. Identical pages with different parameters can create duplicate content, which is often best handled with canonical tags.
3. Parameters that result in pagination may require slightly more nuanced solutions. I won't get into them all here, but Adam Audette gives a good overview of pagination solutions here: http://searchengineland.com/the-latest-greatest-on-seo-pagination-114284
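For example, one option covered there is the rel="next"/rel="prev" markup that Google supports for paginated series. A minimal sketch, using the ?p= parameter from the question above (URLs are illustrative), of what page 2's <head> could contain:
<link rel="prev" href="http://www.vistastores.com/table-lamps" />
<link rel="next" href="http://www.vistastores.com/table-lamps?p=3" />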
Hope this helps. Best of luck with your SEO!
-
Hi,
Instead of blocking those URLs, you can use the "URL Parameters" setting in Google Webmaster Tools. You will see parameters like "dir" and "p" listed there; select the appropriate option for each one to tell Google what actually happens when that parameter comes into play.
Related Questions
-
Robots.txt Help
I need help creating a robots.txt file. Please let me know what to add to the file. Any real or working example?
Intermediate & Advanced SEO | Michael.Leonard
-
If robots.txt has blocked an image (image URL), but another page that can be indexed uses this image, how is the image treated?
Hi MOZers, This probably is a dumb question, but I have a case where robots.txt has an image URL blocked, yet this image is used on a page (let's call it Page A) which can be indexed. If the image on Page A has an alt tag, how is this information digested by crawlers? A) Would Google totally ignore the image and the alt tag information? Or B) would Google consider the alt tag information? I am asking this because all the images on the website are blocked by robots.txt at the moment, but I would really like website crawlers to crawl the alt tag information. Chances are that I will ask the webmaster to allow indexing of images too, but I would like to understand what's happening currently. Looking forward to all your responses 🙂 Malika
Intermediate & Advanced SEO | Malika
-
Robots.txt - Googlebot - Allow... what's it for?
Hello - I just came across this in robots.txt for the first time and was wondering why it is used. Why would you have to proactively tell Googlebot to crawl JS/CSS, and why would you want it to? Any help would be much appreciated - thanks, Luke
User-Agent: Googlebot
Allow: /*.js
Allow: /*.css
Intermediate & Advanced SEO | McTaggart
-
Robots.txt Blocking - Best Practices
Hi All, We have a web provider who's not willing to remove the wildcard line of code blocking all agents from crawling our client's site (User-agent: *, Disallow: /). They have other lines allowing certain bots to crawl the site, but we're wondering if they're missing out on organic traffic by having this main blocking line. It's also a pain because we're unable to set up Moz Pro, potentially because of this first line. We've researched and haven't found a ton of best practices regarding blocking all bots, then allowing certain ones. What do you think is a best practice for these files? Thanks!
User-agent: *
Disallow: /
User-agent: Googlebot
Disallow:
Crawl-delay: 5
User-agent: Yahoo-slurp
Disallow:
User-agent: bingbot
Disallow:
User-agent: rogerbot
Disallow:
User-agent: *
Crawl-delay: 5
Disallow: /new_vehicle_detail.asp
Disallow: /new_vehicle_compare.asp
Disallow: /news_article.asp
Disallow: /new_model_detail_print.asp
Disallow: /used_bikes/
Disallow: /default.asp?page=xCompareModels
Disallow: /fiche_section_detail.asp
Intermediate & Advanced SEO | ReunionMarketing
-
Why is this URL redirecting to our site?
I was doing an audit on our site, searching for duplicate content using different terms from each of our pages, and came across the following result: www.sswug.org/url/32639 redirects to our website. Is that normal? There are hundreds of these URLs in Google, all with the exact same description. I thought it was odd. Any ideas, and what is the consequence of this?
Intermediate & Advanced SEO | Sika22
-
Huge increase in server errors and robots.txt
Hi Moz community! Wondering if someone can help? One of my clients (online fashion retailer) has been receiving huge increase in server errors (500's and 503's) over the last 6 weeks and it has got to the point where people cannot access the site because of server errors. The client has recently changed hosting companies to deal with this, and they have just told us they removed the DNS records once the name servers were changed, and they have now fixed this and are waiting for the name servers to propagate again. These errors also correlate with a huge decrease in pages blocked by robots.txt file, which makes me think someone has perhaps changed this and not told anyone... Anyone have any ideas here? It would be greatly appreciated! 🙂 I've been chasing this up with the dev agency and the hosting company for weeks, to no avail. Massive thanks in advance 🙂
Intermediate & Advanced SEO | | labelPR0 -
Does Google index URLs with hashtags?
We are setting up some jQuery tabs on a page that will produce the same URL with hashtags. For example: index.php#aboutus, index.php#ourguarantee, etc. We don't want that content to be crawled, as we'd like to prevent duplicate content. Does Google normally crawl such URLs, or does it just ignore them? Thanks in advance.
Intermediate & Advanced SEO | seoppc2012
-
Block all but one URL in a directory using robots.txt?
Is it possible to block all but one URL with robots.txt? For example, take domain.com/subfolder/example.html: if we block the /subfolder/ directory, we want all URLs except the exact-match URL domain.com/subfolder to be blocked.
Intermediate & Advanced SEO | nicole.healthline