Does It Really Matter to Restrict Dynamic URLs by Robots.txt?
-
Today, I was checking Google Webmaster Tools and found that 117 dynamic URLs are restricted by robots.txt. I have added the following syntax to my robots.txt; you can get a better idea from the accompanying Excel sheet.
#Dynamic URLs
Disallow: /?osCsid
Disallow: /?q=
Disallow: /?dir=
Disallow: /?p=
Disallow: /*?limit=
Disallow: /*review-form
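For clarity, here is the same block in the wildcard (*) form that Google's robots.txt documentation supports, annotated with the kind of URL each rule is meant to catch. This is only an illustrative sketch, not a copy of my live file:
# Sketch only: wildcard form of the rules above
# Session ID URLs
Disallow: /*?osCsid
# Internal search results
Disallow: /*?q=
# Sorting, e.g. /table-lamps?dir=asc&order=name
Disallow: /*?dir=
# Numbered product pages, e.g. /table-lamps?p=2
Disallow: /*?p=
# Items-per-page URLs
Disallow: /*?limit=
# Review form URLs
Disallow: /*review-form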
I am concerned about the following kinds of pages.
Sorting by specification:
http://www.vistastores.com/table-lamps?dir=asc&order=name
Items per page:
http://www.vistastores.com/table-lamps?dir=asc&limit=60&order=name
Numbering page of products:
http://www.vistastores.com/table-lamps?p=2
Will it hold back the organic performance of my category pages?
-
I am quite late in adding my reply to this question because I was busy fixing the issues with my dynamic URLs.
I have made the following changes on my website:
- I have rewritten all dynamic URLs as static ones, excluding the session ID and internal search URLs, because I have restricted both of those versions via robots.txt.
- I have set canonical tags on the near-duplicate pages of the kind Dr. Pete described in "Duplicate Content in a Post-Panda World" (a minimal sketch of the tag is shown just below this list).
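As a sketch only (using one of the category URLs from my original question, not copied from my live templates), a near-duplicate sorted page carries a canonical tag in its head pointing at the clean category URL:
<!-- On the near-duplicate page http://www.vistastores.com/table-lamps?dir=asc&order=name -->
<link rel="canonical" href="http://www.vistastores.com/table-lamps" />
<!-- Tells search engines to consolidate ranking signals on the clean category URL -->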
I want to give one live example to make this clearer.
Base URL: http://www.vistastores.com/patio-umbrellas
Dynamic URLs: these were dynamic, but I have rewritten them as static ones (an illustrative rewrite-rule sketch follows the list below). A canonical tag pointing to the base URL is in place on each of the near-duplicate pages, which are as follows:
http://www.vistastores.com/patio-umbrellas/shopby/limit-100
http://www.vistastores.com/patio-umbrellas/shopby/lift-method-search-manual-lift
http://www.vistastores.com/patio-umbrellas/shopby/manufacturer-fiberbuilt-umbrellas-llc
http://www.vistastores.com/patio-umbrellas/shopby/price-2,100
http://www.vistastores.com/patio-umbrellas/shopby/canopy-fabric-search-sunbrella
http://www.vistastores.com/patio-umbrellas/shopby/canopy-shape-search-hexagonal
http://www.vistastores.com/patio-umbrellas/shopby/canopy-size-search-7-ft-to-8-ft
http://www.vistastores.com/patio-umbrellas/shopby/color-search-blue
http://www.vistastores.com/patio-umbrellas/shopby/finish-search-black
http://www.vistastores.com/patio-umbrellas/shopby/p-2
http://www.vistastores.com/patio-umbrellas/shopby/dir-desc/order-position
Now, I am looking forward to seeing how Google crawls the site and how it treats all of these canonicalized pages. I am quite excited to see the changes in organic ranking and in the distribution of PageRank across the website. Thanks for your insightful reply.
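For anyone curious how this kind of dynamic-to-static rewrite can be wired up at the server level, here is a purely illustrative Apache mod_rewrite sketch. The patterns below are my assumptions based on the URLs above; an e-commerce platform would normally handle this with its own URL rewriting instead:
# Illustrative only: serve the static "shopby" URLs from the underlying dynamic parameters
RewriteEngine On
# /patio-umbrellas/shopby/p-2  ->  /patio-umbrellas?p=2 (internal rewrite; the URL in the browser stays static)
RewriteRule ^patio-umbrellas/shopby/p-([0-9]+)$ /patio-umbrellas?p=$1 [L,QSA]
# /patio-umbrellas/shopby/dir-desc/order-position  ->  /patio-umbrellas?dir=desc&order=position
RewriteRule ^patio-umbrellas/shopby/dir-(asc|desc)/order-([a-z]+)$ /patio-umbrellas?dir=$1&order=$2 [L,QSA]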
-
Robots.txt isn't the best solution for dynamic URLs. Depending on the type of URL, there are a number of other solutions available.
1. As blurbpoint mentions, Google Webmaster Tools allows you to specify URL parameter handling. They actually do a decent job of this automatically, but they also give you the option to change the settings yourself.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1235687
2. Identical pages with different parameters can create duplicate content, which is often best handled with canonical tags.
3. Parameters that result in pagination may require slightly nuanced solutions. I won't get into them all here, but Adam Audette gives a good overview of pagination solutions here: http://searchengineland.com/the-latest-greatest-on-seo-pagination-114284
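As one quick illustration of the kind of markup those solutions involve (hypothetical URLs; rel="prev"/"next" is one of the commonly discussed options from that period), a paginated category can declare its sequence in the head of each page:
<!-- On page 2 of a paginated category (hypothetical URLs) -->
<link rel="prev" href="http://www.example.com/table-lamps?p=1" />
<link rel="next" href="http://www.example.com/table-lamps?p=3" />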
Hope this helps. Best of luck with your SEO!
-
Hi,
Instead of blocking those URLs, you can use the "URL Parameters" setting in Google Webmaster Tools. You will see parameters like "dir" and "p" listed there; for each one, select the appropriate option based on what actually happens when that parameter comes into the picture.
-
Related Questions
-
Many New Urls at once
Hi, I have about 5,000 new URLs to publish. For SEO/Google, should I publish them gradually, or is all at once fine? *By the way, all these URLs were already indexed in the past, but were then redirected. Cheers,
Intermediate & Advanced SEO | | viatrading10 -
Need help with Robots.txt
An eCommerce site built with the Modx CMS. I found lots of auto-generated duplicate page issues on that site, and now I need to disallow some of those pages for that category. Here is what the actual product page URL looks like:
product_listing.php?cat=6857
And here is the auto-generated URL structure:
product_listing.php?cat=6857&cPath=dropship&size=19
Can anyone suggest how to disallow this specific category through robots.txt? I am not so familiar with Modx and this kind of link structure. Your help will be appreciated. Thanks
Intermediate & Advanced SEO | | Nahid1 -
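A minimal robots.txt sketch of the kind of rule being asked about above, assuming Google-style wildcard support and that only the auto-generated filter variants should be blocked:
User-agent: *
# Block only URLs carrying the auto-generated cPath parameter;
# plain product_listing.php?cat=6857 pages stay crawlable
Disallow: /*cPath=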
Robots.txt and redirected backlinks
Hey there, since a client's global website has a very complex structure which led to big duplicate content problems, we decided to disallow crawler access and instead allow access to only a few relevant subdirectories. While indexing has improved since then, I was wondering if we might have cut off link juice. Since several backlinks point to the disallowed root directory and are redirected (301) from there to the allowed directory, I was wondering if this could cause any problems? Example: if there is a backlink pointing to example.com (disallowed in robots.txt) and it is redirected from there to example.com/uk/en (allowed in robots.txt), would this cut off the link juice? Thanks a lot for your thoughts on this. Regards, Jochen
Intermediate & Advanced SEO | | Online-Marketing-Guy0 -
How to switch from URL based navigation to Ajax, 1000's of URLs gone
Hi everyone, We have thousands of URLs generated by numerous product filters on our ecommerce site, e.g. /category1/category11/brand/color-red/size-xl+xxl/price-cheap/in-stock/. We are thinking of moving these filters to Ajax in order to offer a better user experience and get rid of these useless URLs. In your opinion, what is the best way to deal with this huge move?
1. Leave the existing URLs responding as before: as they will disappear from our sitemap (they won't be linked anymore), I imagine robots will someday consider them obsolete?
2. Redirect them permanently (301) to the closest existing URL
3. Mark them as gone (4xx)
I'd vote for option 2. Bots will suddenly see thousands of 301s, but this reflects what is really happening, right? Do you think this could result in some penalty? Thank you very much for your help. Jeremy
Intermediate & Advanced SEO | | JeremyICC0 -
E-commerce duplicate URLS
Hi, I just realized that my e-commerce products do not differ in anything except the SKUs, the price and the product name. Apart from that, each page has the same sidebar and the same piece of content under every product page. This is the reason why I am getting too many duplicate URL warnings through Moz Analytics. I do not have any other content to add for each product because of the nature of the products; only the price, product name and SKUs will differ, and the rest will all be the same for each product. How can I fix this? Thanks
Intermediate & Advanced SEO | | MindlessWizard0 -
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similar to autotrader.com or cargurus.com, and there are two primary components: 1. Vehicle Listings Pages: this is the page where the user can use various filters to narrow the vehicle listings to find the vehicle they want.
2. Vehicle Details Pages: this is the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day.
We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.
We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right.
Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt Advantages:
- Super easy to implement
- Conserves crawl budget for large sites
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt Disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would lead to 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're pagerank sculpting?)
Noindex Advantages:
- Does prevent vehicle details pages from being indexed
- Allows ALL pages to be crawled (advantage?)
Noindex Disadvantages:
- Difficult to implement (vehicle details pages are served using Ajax, so they have no tag). The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex tag based on querystring variables, similar to this stackoverflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it)
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindex pages. I say "force" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if blocked by robots.txt.
Hash (#) URL Advantages:
- By using hash (#) URLs for the links on Vehicle Listing pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with Javascript, the crawler won't be able to follow/crawl these links.
- Best of both worlds: crawl budget isn't overtaxed by thousands of noindex pages, and internal links used to index robots.txt-disallowed pages are gone.
- Accomplishes the same thing as "nofollowing" these links, but without looking like pagerank sculpting (?)
- Does not require complex Apache stuff
Hash (#) URL Disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since they can't crawl/follow them?
Initially, we implemented robots.txt--the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're pagerank sculpting or something like that.
If we implement noindex on these pages (and doing so is a difficult task itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallowal in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost/etc. It seems like a waste of resources, and in some shadowy way bad for SEO.
My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these ().
Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | | browndoginteractive0 -
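For reference, the Apache/X-Robots-Tag idea mentioned in the question above can be sketched roughly like this. The parameter name is a placeholder (the real query-string variables aren't given), and it assumes both mod_rewrite and mod_headers are available:
# Illustrative only: send a noindex header for vehicle-details requests,
# keyed off a placeholder query-string parameter ("vehicle_id")
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)vehicle_id= [NC]
RewriteRule ^ - [E=VEHICLE_DETAIL:1]
# Depending on the setup, the variable may surface as REDIRECT_VEHICLE_DETAIL instead
Header set X-Robots-Tag "noindex, follow" env=VEHICLE_DETAIL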
SEO Strategy for URL Change
I'm working with a company who will likely have to change their URL because of a trademark dispute. They will be able to maintain the new URL for some period but will soon need to drop the existing URL all together. Aside from the usual keyword considerations when choosing a URL, are there any SEO strategies I should consider as we execute this change?
Intermediate & Advanced SEO | | Jon_KS0 -
Robots.txt & url removal vs. noindex, follow?
When de-indexing pages from Google, what are the pros & cons of each of the below two options:
1. robots.txt & requesting URL removal from Google Webmasters
2. Use the noindex, follow meta tag on all doctor profile pages; keep the URLs in the sitemap file so that Google will recrawl them and find the noindex meta tag, and make sure that they're not disallowed by the robots.txt file
Intermediate & Advanced SEO | | nicole.healthline0