Meta robots or robots.txt file?
-
Hi Mozzers!
For parametric URLs, would you recommend meta robots or a robots.txt file?
For example, given http://www.example.com/category/product/cat-no/quickView, I want to stop the /quickView URLs from being indexed. And what's the real difference between the two?
Thanks again!
Kay
-
No problem at all
-Andy
-
Thanks Andy!!!
-
Hi Kay,
If you want to disallow access to a page, then add the following to the Robots.txt file:
Disallow: /quickView
Then test this in Webmaster Tools.
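A minimal robots.txt sketch for this case (an illustration only; note that a plain `Disallow: /quickView` prefix rule matches only URLs whose path starts with /quickView, so nested paths like /category/product/123/quickView need a wildcard, which Googlebot supports as an extension to the original robots exclusion standard):

```
User-agent: *
# Block any URL with /quickView deeper in the path (Googlebot-style * wildcard)
Disallow: /*/quickView
```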
If you want to tell Google not to index a page, then you need to do this at the page level using Meta Robots. However, don't do both (at least not at the same time). If you disallow access to a set of pages via Robots.txt and then at a later stage you Meta Noindex, Google won't see this because of the Disallow in the Robots.txt.
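For reference, the page-level tag looks like this (a sketch; it must sit in the `<head>` of each /quickView page, and Google must be allowed to crawl the page in order to see it):

```html
<head>
  <!-- Ask search engines not to index this page, but still follow its links -->
  <meta name="robots" content="noindex, follow">
</head>
```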
It really depends on what you are trying to achieve, but it sounds like meta robots is the way to go for you.
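If you want to sanity-check a Disallow rule locally before deploying it, Python's standard library ships a simple robots.txt parser. This is just a sketch using made-up URLs in the spirit of the example above; note that `urllib.robotparser` does plain prefix matching and does not understand Googlebot's `*` wildcards, so wildcard rules should still be verified in Webmaster Tools.

```python
from urllib import robotparser

# Hypothetical rules for the example site; /quickView here is a top-level
# prefix, since this parser does not support wildcard patterns.
rules = [
    "User-agent: *",
    "Disallow: /quickView",
]

parser = robotparser.RobotFileParser()
parser.parse(rules)

# A URL under the disallowed prefix is blocked from crawling...
print(parser.can_fetch("*", "http://www.example.com/quickView/123"))
# ...while other paths remain crawlable.
print(parser.can_fetch("*", "http://www.example.com/category/product"))
```

Remember this only tells you whether a crawler may *fetch* the URL; a blocked URL can still be indexed from links alone, which is exactly why Disallow is not a substitute for noindex.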
-Edit... here is an interesting read for you.
-Andy
-