Is Googlebot ignoring directives? Or is it me?
-
I saw an answer to a question in this forum a few days ago that said it was a bad idea to use robots.txt to tell Googlebot to go away.
That SEO said it was much better to use the META tag to say noindex,nofollow.
So I removed the robots.txt directive and added the META tag:
<meta robots='noindex,nofollow'>
Today, I see Google showing my send-to-a-friend page where I expected the real page to be.
Does it mean Google is stupid?
Does it mean Google ignores the robots META tag?
Does it mean short pages have more value than long pages?
Does it mean if I convert my whole site to snippets, I'll get more traffic?
Does it mean garbage trumps content?
I have more questions, but this is more than enough.
-
Thank you, Ryan.
They completely ignored the meta tags, which completely messed up our SERPs. So I put the directive back in robots.txt. I won't trust Google again to do the right thing.
-
Hi Allan,
It is a best practice to use meta tags to indicate your indexing preference to search engines.
Normally the recommended implementation would be "noindex, follow", but without examining your site it is impossible to know for sure.
Google honors meta tags, but a number of issues could be the source of the problem. For example, if you did not use valid syntax, the tag may not be honored. And if you are blocking the page in robots.txt, search engines cannot crawl the page to read the tag at all.
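For reference, a minimal sketch of valid syntax, assuming the usual "noindex, follow" preference, looks like this (the tag belongs in the page's <head>):

<head>
  <!-- Both the name and content attributes are required. A tag written as
       <meta robots='noindex,nofollow'> omits them, and search engines
       may simply ignore it. -->
  <meta name="robots" content="noindex, follow">
</head>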
As for the last three questions, the simple answer is that quality content is best.
If you can share the URL of the page involved, we can offer specific feedback on the meta tag implementation.
Related Questions
-
GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2
Whole website moved to the https://www. HTTP/2 version 3 years ago. When we review log files, it is clear that, for the home page, Googlebot continues to access the site only via the HTTP/1.1 protocol.
- Robots file is correct (simply allowing all and referring to the https://www. sitemap)
- Sitemap is referencing https://www. pages, including the homepage
- Hosting provider has confirmed the server is correctly configured to support HTTP/2 and provided evidence of access via HTTP/2 working
- 301 redirects are set up for the non-secure and non-www versions of the website, all to the https://www. version
- Not using a CDN or proxy
GSC reports the home page as correctly indexed (with the https://www. version canonicalised) but still shows the non-secure version of the website as the referring page in the Discovery section. GSC also reports the homepage as being crawled every day or so. Totally understand it can take time to update the index, but we are at a complete loss to understand why Googlebot continues to use only HTTP/1.1 and not HTTP/2. A possibly related issue, and of course what is causing concern, is that new pages of the site seem to index and perform well in the SERPs... except the home page. It never makes it to page 1 (other than for the brand name) despite rating multiples higher in terms of content, speed, etc. than other pages, which still get indexed in preference to the home page. Any thoughts, further tests, ideas, direction or anything will be much appreciated!
Technical SEO | AKCAC
-
Googlebot crawl error: JavaScript method is not defined
Hi All, I have this problem, and it has been a pain in the ****. I get tons of crawl errors in my logs from "Googlebot" saying a specific JavaScript method does not exist. I then go to the affected page and test in a web browser, and the page works without any JavaScript errors. Can someone help with resolving this issue? Thanks in advance.
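One common cause of this pattern: Googlebot's renderer fetches the page without a script that a normal browser loads fine (blocked in robots.txt, timed out, or loaded later), so a method call runs before the method exists. A minimal defensive sketch, with the method name purely hypothetical:

<script>
  // Only call the method if its defining script actually loaded;
  // this keeps the page from throwing when a crawler's renderer
  // blocks or times out on that script.
  if (typeof window.trackWidget === "function") {
    window.trackWidget();
  }
</script>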
Technical SEO | FreddyKgapza
-
Case study: redirecting one site into another?
Hi there, We have access to a third-party site that has high domain authority, and we want to know if anyone has a case study demonstrating what happens when you 301-redirect a high-DA site to another high-DA site. In particular, we are wondering what kind of lift the receiving site would see from the additional link equity.
Technical SEO | nicole.healthline
-
How are server-side redirects perceived compared to direct links (on a directory site)?
Hi, I'm creating some listings for a client on a relevant B2B directory (a good-quality directory). I asked if the links are 'followed' or 'nofollowed', and they said they are 'server-side redirects', so not direct links. Does anyone know how these are likely to be perceived by Google? All best, Dan
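For what it's worth, the difference looks something like this in markup; the directory URL pattern below is purely hypothetical:

<!-- A direct link: points straight at the listed site. -->
<a href="https://client-site.example.com/">Client site</a>

<!-- A server-side redirect link (hypothetical URL pattern): points at the
     directory's own script, which then responds with a redirect. Google
     generally treats a crawlable 301 much like a direct link, but if the
     /visit path is blocked in robots.txt, or the redirect is a 302, little
     or no equity may pass. -->
<a href="https://directory.example.com/visit?id=12345">Client site</a>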
Technical SEO | Dan-Lawrence
-
Weird blog tags and redirects
Hello fellow digital marketeers! As an in-house kinda guy, I rarely get to audit sites other than my own, but I was tasked with auditing another. So I ran it through Screaming Frog and the usual tools. A couple of URLs came back with timeout messages, so I checked them manually; they're apparently part of a blog's tag archive: http://www.bestpracticegroup.com/tag/training-2/ I click 'read more' and it takes you to: http://www.bestpracticegroup.com/pfi-contracts-3-myth-busters-to-help-achieve-savings/ The first URL seems entirely redundant. Has anyone else seen something like this? An explanation as to why something like that would exist, and how you'd handle it, would be grand! Much appreciated, John.
Technical SEO | Muhammad-Isap
-
SEOMoz Crawler vs Googlebot Question
I read somewhere that SEOMoz’s crawler marks a page in its Crawl Diagnostics as duplicate content if it doesn’t have more than 5% unique content (I can’t find that statistic anywhere on SEOMoz to confirm, though). We are an eCommerce site, so many of our pages share the same sidebar, header, and footer links. The pages flagged by SEOMoz as duplicates have these same links, but they have unique URLs and category names. Because they’re not actual duplicates of each other, canonical tags aren’t the answer. Also, because inventory might automatically come back in stock, we can’t use 301 redirects on these “duplicate” pages. It seems like it’s the sidebar, header, and footer links that are causing these pages to be flagged as duplicates. Does the SEOMoz crawler mimic the way Googlebot works? Also, is Googlebot smart enough not to count the sidebar and header/footer links when looking for duplicate content?
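For reference, the canonical tag being ruled out above would, on a true duplicate, look something like this (URLs hypothetical):

<!-- Placed in the <head> of the duplicate page, pointing search engines
     at the preferred version. Not appropriate in the case above, since
     the flagged pages are distinct categories rather than duplicates. -->
<link rel="canonical" href="https://www.example-store.com/widgets/">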
Technical SEO | ElDude
-
Problem with redirection
Please help me solve a problem with redirection. I redid the site and moved it to a new domain page by page, as recommended. I use a WordPress redirect plugin. I did everything as written 10 days ago but don't see the redirection. For example: old page http://aurora17.com/?page_id=2485 New page http://njcruise.org/alaska-cruise-tour/ Where is the problem? How do I solve it?
Technical SEO | NadiaFL
-
Is there a penalty for too many 301 redirects?
We are thinking of restructuring the URLs on our site, and I was wondering if there is a penalty associated with setting up so many 301 redirects.
Technical SEO | nicole.healthline