Robots.txt blocking meta description from being read
-
I recently updated my robots.txt file to fix the description snippet on Google that reads "A description for this result is not available because of this site's robots.txt – learn more". How can I get this updated to show my meta description instead?
-
Hi IMM,
What page are you trying to fix? And what did you change in the robots.txt file to fix it?
-
It sounds like you have something going on similar to what Matt Cutts talks about here:
http://www.youtube.com/watch?v=KBdEwpRQRD0
You have a result showing up in the SERPs even though the page is blocked in robots.txt. The reason it is still in the SERPs is that other pages are linking to that URL on your site.
I am going to assume that you want to keep these pages out of the index.
As you already have pages in the index, you need to get them removed, not just blocked. I would suggest using a noindex meta tag and then letting the crawler crawl the page. A robots.txt block stops the bot cold and does not let it read anything on the page, so it never sees the meta tags at all. If you let the bot read a noindex meta tag, that tag directs Google to take the page out of the search results.
https://support.google.com/webmasters/answer/93708?hl=en
"When we see a noindex meta tag on a page, Google will completely drop the page from our search results, even if other pages link to it."
That said, if you have made a mistake and have been blocking Google when you did not mean to, make sure that you do not use the noindex meta tag on those pages, and make sure you are not blocking Google in your robots.txt. If that is the case and you are still seeing the wrong info in the SERPs, you do need to wait a little while: the updates are not instantaneous and may take a few weeks to show up in the SERPs. In the meantime, just double-check that everything in your robots.txt is correct.
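To make the crawl-versus-index distinction concrete, here is a minimal sketch of both pieces (the /checkout/ path is just a placeholder, not taken from your site). A robots.txt block stops crawling entirely, so the bot never fetches the page and never sees any meta tags on it:

    User-agent: *
    Disallow: /checkout/

A noindex meta tag goes in the head of the page itself, and it only takes effect if crawling is allowed so the bot can actually read it:

    <meta name="robots" content="noindex">

In short: robots.txt controls crawling and noindex controls indexing, so do not combine a robots.txt block with a noindex tag on the same page and expect the tag to work.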
Be patient and good luck!
-
I only have pages in the checkout flow blocked. My homepage and other pages should be accessible.
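For reference, the rules look something like this (the paths here are illustrative, not my exact file):

    User-agent: *
    Disallow: /checkout/
    Disallow: /cart/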
-
If you deny the robot access to the URL, then it will not be able to read the meta tags. There isn't a way around this, as that is the purpose of denying robots. If the robot is allowed, then the snippet will update when Googlebot recrawls your page and updates its index. How long that takes varies from site to site and page to page.
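If you want to double-check what Googlebot is actually allowed to fetch, you can test your live robots.txt with a few lines of Python. This is just a sketch; example.com and the /checkout/ path are placeholders for your own domain and blocked paths:

    from urllib.robotparser import RobotFileParser

    # Fetch and parse the live robots.txt (substitute your own domain).
    rp = RobotFileParser("https://www.example.com/robots.txt")
    rp.read()

    # True means Googlebot may crawl the URL and can therefore read its meta tags.
    print(rp.can_fetch("Googlebot", "https://www.example.com/"))          # homepage
    print(rp.can_fetch("Googlebot", "https://www.example.com/checkout/")) # blocked path

If the homepage comes back as allowed, robots.txt is no longer the problem and it is just a matter of waiting for Googlebot to recrawl the page.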