Does It Really Matter to Restrict Dynamic URLs by Robots.txt?
-
Today, I was checking Google Webmaster Tools and found that 117 dynamic URLs are restricted by robots.txt. I have added the following syntax to my robots.txt; you can get more detail from the attached Excel sheet.
#Dynamic URLs
Disallow: /?osCsid
Disallow: /?q=
Disallow: /?dir=
Disallow: /?p=
Disallow: /*?limit=
Disallow: /*review-form
I am concerned about the following kinds of pages.
Sorting by specification:
http://www.vistastores.com/table-lamps?dir=asc&order=name
Items per page:
http://www.vistastores.com/table-lamps?dir=asc&limit=60&order=name
Pagination of product lists:
http://www.vistastores.com/table-lamps?p=2
Will these restrictions hurt the organic performance of my category pages?
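One detail worth checking before worrying about rankings: in Google's robots.txt matching, '*' is a wildcard but everything else (including '?') is a literal, and patterns are anchored at the start of the path, so a rule like 'Disallow: /?p=' only matches pagination at the site root. Here is a rough sketch of that matching behavior (my own simplified helper, not Google's actual code):

```python
import re

def is_blocked(path, disallow_patterns):
    """Rough sketch of Google-style robots.txt matching:
    '*' matches any run of characters, everything else is a literal,
    and each pattern is anchored at the start of the URL path."""
    for pattern in disallow_patterns:
        regex = re.escape(pattern).replace(r"\*", ".*")
        if re.match(regex, path):
            return True
    return False

# '/?p=' only matches paginated URLs at the site root...
print(is_blocked("/?p=2", ["/?p="]))                # True
print(is_blocked("/table-lamps?p=2", ["/?p="]))     # False
# ...while a wildcard rule catches the parameter on any page:
print(is_blocked("/table-lamps?p=2", ["/*?p="]))    # True
```

So the category-page URLs above are only caught by the rules that carry a leading '/*' wildcard.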
-
I am quite late adding my reply to this question because I was busy fixing the dynamic URL issues.
I have made the following changes on my website.
- I have rewritten all dynamic URLs as static ones, except for session IDs and internal search, because I have restricted both of those via robots.txt.
- I have set canonical tags on the near-duplicate pages, as Dr. Pete described in "Duplicate Content in a Post-Panda World".
Here is a live example to show what I mean.
Base URL: http://www.vistastores.com/patio-umbrellas
Dynamic URLs: These were dynamic, but I have rewritten them as static ones. Each of these near-duplicate pages also carries a canonical tag pointing to the base URL:
http://www.vistastores.com/patio-umbrellas/shopby/limit-100
http://www.vistastores.com/patio-umbrellas/shopby/lift-method-search-manual-lift
http://www.vistastores.com/patio-umbrellas/shopby/manufacturer-fiberbuilt-umbrellas-llc
http://www.vistastores.com/patio-umbrellas/shopby/price-2,100
http://www.vistastores.com/patio-umbrellas/shopby/canopy-fabric-search-sunbrella
http://www.vistastores.com/patio-umbrellas/shopby/canopy-shape-search-hexagonal
http://www.vistastores.com/patio-umbrellas/shopby/canopy-size-search-7-ft-to-8-ft
http://www.vistastores.com/patio-umbrellas/shopby/color-search-blue
http://www.vistastores.com/patio-umbrellas/shopby/finish-search-black
http://www.vistastores.com/patio-umbrellas/shopby/p-2
http://www.vistastores.com/patio-umbrellas/shopby/dir-desc/order-position
Now I am looking forward to seeing how Google crawls the site and how it treats all the canonicalized pages. I am quite excited to see the changes in organic ranking as PageRank is redistributed across the website. Thanks for your insightful reply.
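Concretely, each of those near-duplicate filter pages carries something like this in its head (a sketch of the setup described above, using the color filter page as the example):

```html
<!-- On http://www.vistastores.com/patio-umbrellas/shopby/color-search-blue -->
<link rel="canonical" href="http://www.vistastores.com/patio-umbrellas" />
```

The base category page canonicalizes to itself, so only one version should accumulate ranking signals.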
-
Robots.txt isn't the best solution for dynamic URLs. Depending on the type of URL, there are a number of other solutions available.
1. As blurbpoint mentions, Google Webmaster Tools allows you to specify URL handling. They actually do a decent job of this automatically, but also allow you the option to change the settings yourself.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1235687
2. Identical pages with different parameters can create duplicate content, which is often best handled with canonical tags.
3. Parameters that result in pagination may require slightly nuanced solutions. I won't get into them all here but Adam Audette gives a good overview of pagination solutions here: http://searchengineland.com/the-latest-greatest-on-seo-pagination-114284
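As one illustration of the pagination solutions covered there, Google's rel="next"/rel="prev" link elements (its recommended pagination markup at the time) would look roughly like this on the paginated category pages from the question:

```html
<!-- Sketch: in the <head> of page 2 of the series,
     http://www.vistastores.com/table-lamps?p=2 -->
<link rel="prev" href="http://www.vistastores.com/table-lamps" />
<link rel="next" href="http://www.vistastores.com/table-lamps?p=3" />
```

This tells Google the pages form a sequence rather than being duplicates of one another.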
Hope this helps. Best of luck with your SEO!
-
Hi,
Instead of blocking those URLs, you can use the URL Parameters setting in Google Webmaster Tools. You will see parameters like "dir" and "p" listed there; select the appropriate option for each one, describing what actually happens when that parameter comes into play.