Why should I add URL parameters when Meta Robots NOINDEX is available?
-
Today, I checked Bing Webmaster Tools and came across its Ignore URL Parameters feature.
Bing Webmaster Tools shows me certain parameters for URLs where I have already added a META Robots tag with the NOINDEX, FOLLOW syntax.
I can see the canopy_fabric_search parameter in the suggested section. It's due to the following kind of URLs:
http://www.vistastores.com/patio-umbrellas?canopy_fabric_search=1728
http://www.vistastores.com/patio-umbrellas?canopy_fabric_search=1729
http://www.vistastores.com/patio-umbrellas?canopy_fabric_search=1730
http://www.vistastores.com/patio-umbrellas?canopy_fabric_search=2239
But I have added META Robots NOINDEX, FOLLOW to keep these pages out of the index. So why is this happening?
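For reference, this is the standard NOINDEX, FOLLOW syntax being described; a minimal sketch of the tag as it would sit in the `<head>` of each filtered page:

```html
<!-- Asks bots not to index this page while still following its links -->
<meta name="robots" content="noindex, follow">
```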
-
This is good for me... Let me drill down more into that article... I'll check in Google Webmaster Tools before making it live on the server, so it may help me get this task 100% right!
-
Don't use Disallow: /*?
because that may well disallow everything; you will need to be more specific than that.
Read that whole article on pattern matching, then do a search for 'robots.txt pattern matching' and you will find examples, so you can build on others' experiences.
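As a sketch of what "more specific" can look like, assuming the canopy_fabric_search parameter from the question above, pattern-matched rules can target just that parameter instead of every query string:

```
User-agent: *
# Block URLs where canopy_fabric_search is the first query parameter...
Disallow: /*?canopy_fabric_search=
# ...and where it appears after another parameter
Disallow: /*&canopy_fabric_search=
```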
-
I hope the following one is the right one for me... right?
Disallow: /*?
-
I suggest you then use pattern matching to restrict which parameters you don't want crawled.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449
-
I agree with dealing with this via robots.txt. But my website has 1,000+ attributes for narrow-by search, and I don't want to disallow all dynamic pages through robots.txt.
Will that be flexible for me to handle? And the answer is no!
What do you think about it?
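A hedged aside on scale: if the 1,000+ filter attributes share a naming convention (purely an assumption here; the question only shows canopy_fabric_search), a handful of wildcard rules could cover them without listing each parameter individually:

```
User-agent: *
# Hypothetical: if every faceted filter parameter ended in "_search",
# one wildcard rule would cover all of them
Disallow: /*_search=
```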
-
I'd say the first thing to note is that NOINDEX is an assertion on your part that the pages should not be indexed. Search bots have the ability to ignore your instruction; it should be rare that they do, but it's not beyond the realms of possibility.
What I would do in your position is add a Disallow line to your **robots.txt** to completely block access to
/patio-umbrellas?canopy_fabric_search*
That should be more effective if you really don't want these URLs in the index.
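Assembled into robots.txt, that suggestion would look something like the sketch below; note the trailing * is redundant for Google and Bing, since Disallow rules already match on prefixes, but it does no harm:

```
User-agent: *
# Block crawler access to the faceted canopy-fabric URLs
Disallow: /patio-umbrellas?canopy_fabric_search*
```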