Why does Bing suggest URL parameters to ignore when Meta Robots NOINDEX is already in place?
-
Today I was checking Bing Webmaster Tools and came across the Ignore URL Parameters feature.
Bing Webmaster Tools suggests certain parameters for URLs to which I have already added a Meta Robots NOINDEX, FOLLOW tag.
I can see the canopy_fabric_search parameter in the suggested section. It comes from URLs like the following:
http://www.vistastores.com/patio-umbrellas?canopy_fabric_search=1728
http://www.vistastores.com/patio-umbrellas?canopy_fabric_search=1729
http://www.vistastores.com/patio-umbrellas?canopy_fabric_search=1730
http://www.vistastores.com/patio-umbrellas?canopy_fabric_search=2239
But I have added Meta Robots NOINDEX, FOLLOW precisely to keep these pages out of the index. So why is this happening?
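For reference, the tag being described is presumably the standard robots meta tag in each page's head; a minimal sketch (the exact markup on the site is an assumption):

```html
<head>
  <!-- Ask search engines not to index this page, but still follow its links -->
  <meta name="robots" content="noindex, follow">
</head>
```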
-
That's good to know. Let me drill down more into that article. I'll check in Google Webmaster Tools before making anything live on the server, which should help me get this task exactly right.
-
Don't use Disallow: /*?
because that may well disallow everything; you will need to be more specific than that.
Read that whole article on pattern matching, then do a search for 'robots.txt pattern matching' and you will find examples you can adapt based on others' experiences.
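To see why that rule is so broad, here is a small Python sketch of Google/Bing-style robots.txt pattern matching ('*' matches any character sequence, a trailing '$' anchors the match, and patterns are otherwise prefix matches). The helper function is written from scratch for illustration; it is not a standard-library API:

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Return True if a Google/Bing-style robots.txt pattern matches a URL path."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape regex metacharacters, but turn the robots.txt wildcard '*' into '.*'
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    if anchored:
        regex += "$"
    # re.match anchors at the start of the path, giving prefix semantics
    return re.match(regex, path) is not None

# The broad rule matches (i.e. blocks) EVERY URL that has a query string:
print(robots_pattern_matches("/*?", "/patio-umbrellas?canopy_fabric_search=1728"))  # True
print(robots_pattern_matches("/*?", "/some-other-page?color=red"))                  # True
# A narrower pattern blocks only the faceted-search URLs:
print(robots_pattern_matches("/patio-umbrellas?canopy_fabric_search",
                             "/patio-umbrellas?canopy_fabric_search=1728"))         # True
print(robots_pattern_matches("/patio-umbrellas?canopy_fabric_search",
                             "/patio-umbrellas"))                                   # False
```

This is why "Disallow: /*?" blocks every dynamic URL on the site, while a parameter-specific pattern blocks only the faceted pages.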
-
I hope the following is the one I need, right?
Disallow: /*?
-
I suggest you use pattern matching to restrict which parameters you don't want crawled:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449
-
I agree that robots.txt is the way to deal with this. But my website has 1,000+ attributes for narrow-by search, and I don't want to disallow all dynamic pages via robots.txt.
Would that be flexible enough for me to manage? I'd say the answer is no.
What do you think?
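For what it's worth, if the faceted parameters share a naming convention, a few wildcard rules could block just the faceted URLs without disallowing every dynamic page. A sketch, where canopy_fabric_search is taken from the question but the shared "_search" suffix is purely an assumption about the site's naming scheme:

```
User-agent: *
# One line per faceted parameter:
Disallow: /*?*canopy_fabric_search=
# Or, if many parameters share a suffix such as "_search",
# a single wildcard rule could cover them all:
Disallow: /*?*_search=
```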
-
I'd say the first thing to note is that NOINDEX is an assertion on your part that the pages should not be indexed. Search bots have the ability to ignore your instruction; it should be rare that they do, but it's not beyond the realms of possibility. Also bear in mind that NOINDEX does not stop crawling: a bot has to crawl the page in order to see the tag at all.
What I would do in your position is add a disallow line to your **robots.txt** to completely disallow access to
/patio-umbrellas?canopy_fabric_search*
That should be more effective if you really don't want these URLs in the index.
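As a sketch, that disallow line would sit in robots.txt like this (note that wildcard support is a Google/Bing extension, not part of the original robots.txt standard, and the trailing * is redundant under prefix matching but harmless):

```
User-agent: *
Disallow: /patio-umbrellas?canopy_fabric_search*
```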