Issue with Robots.txt file blocking meta description
-
Hi,
Can you please tell me why the following error is showing up in the SERPs for a website that was just re-launched 7 days ago with new pages (301 redirects are in place)?
A description for this result is not available because of this site's robots.txt – learn more.
Once we noticed it yesterday, we made some changes to the file and reduced the number of items in the disallow list.
Here is the current Robots.txt file:
```
# XML Sitemap & Google News Feeds version 4.2 - http://status301.net/wordpress-plugins/xml-sitemap-feed/
Sitemap: http://www.website.com/sitemap.xml
Sitemap: http://www.website.com/sitemap-news.xml

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
```
Other notes... the site was developed in WordPress and uses the following plugins:
- WooCommerce
- All-in-One SEO Pack
- Google Analytics for WordPress
- XML Sitemap & Google News Feeds
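One quick way to sanity-check that this robots.txt really only blocks the admin paths (and not the pages whose descriptions are missing) is Python's stdlib robot parser. This is just a sketch against the rules quoted above; the URLs are placeholders, not the real site:

```python
from urllib import robotparser

# The rules currently in the site's robots.txt (per the post above).
ROBOTS_TXT = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Ordinary content pages should be crawlable; only the admin paths are blocked.
print(rp.can_fetch("Googlebot", "http://www.website.com/some-page/"))  # True
print(rp.can_fetch("Googlebot", "http://www.website.com/wp-admin/"))   # False
```

If a blocked page shows `True` here, the "blocked by robots.txt" snippet in the SERPs is coming from a cached copy of the old rules rather than the live file.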
Currently, in the SERPs, it keeps jumping back and forth between showing the meta description for the www domain and showing the error message (above).
Originally, WP Super Cache was installed and has since been deactivated, removed from WP-config.php and deleted permanently.
One other thing to note: we noticed yesterday that there was an old XML sitemap still on file, which we have since removed, and we resubmitted a new one via WMT. Also, the old pages are still showing up in the SERPs.
Could it just be that this will take time for Google to review the new sitemap and re-index the new site?
If so, what kind of timeframes are you seeing these days for new pages to show up in the SERPs? Days, weeks? Thanks, Erin
-
At the moment, it doesn't seem that rel=publisher is doing all that much for sites (aside from sometimes showing better info in the Knowledge Graph listing on brand searches), but personally I believe its functionality and influence are going to be greatly expanded fairly soon, so it's well worth doing. As far as it contributing anything to help speed up indexing... I doubt it.
P.
-
Paul,
Thanks... you hit upon my hunch, that we will just have to wait.
Much of the information in the SERPs (meta descriptions, titles, and URLs) is still old, even though the listings redirect to the new pages when I click.
Thanks for the tip about social media, too.
Do you think it will help to get the rel=publisher link to the Google+ page on the site?
Erin
-
A lot of people, especially WP users, use plugins that may block certain spiders from crawling your site, but in your case you don't seem to have any.
-
If you just changed the robots.txt file yesterday, my guess is you're going to have to be patient while the site gets recrawled, Erin. Any pages that were in the index and cached before yesterday's robots.txt update will still reflect the directive not to show the meta description (since that's the condition they were under when they were cached).
I suspect the pages you're seeing with meta descriptions have been crawled since the robots.txt update. Are you seeing the same page alternate between showing a meta description and not?
As far as old pages showing in the SERPs go, again, they'll all have to be recrawled before the 301 redirects can be discovered and the search engines can begin to understand that the old URLs should be dropped. (Even then it can take days to weeks for the originals to drop out.)
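While waiting for the recrawl, it's worth confirming the 301s are actually being served. A minimal stdlib sketch that fetches just the first hop of a URL without following the redirect (the example URL is hypothetical):

```python
import http.client
from urllib.parse import urlsplit

def first_hop(url):
    """Return (status code, Location header) of the first response,
    without following any redirect."""
    parts = urlsplit(url)
    conn = http.client.HTTPConnection(parts.netloc, timeout=10)
    try:
        conn.request("HEAD", parts.path or "/")
        resp = conn.getresponse()
        return resp.status, resp.getheader("Location")
    finally:
        conn.close()

# e.g. first_hop("http://www.website.com/old-page/")
# should come back as (301, "http://www.website.com/new-page/")
```

If any old URL returns 302 or 200 here instead of 301, the search engines will never learn to drop it.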
Another very effective way to help get the new site indexed faster is to attract some good-quality new links to the new pages. Social Media can be especially effective for this, Google+ in particular.
Paul
-
Thanks!
What do I need to look for in the .htaccess file?
Here is what is there... and the rest (not shown) are redirects:
```
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```
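That block is just the standard WordPress front-controller rewrite and is harmless. The 301 redirects Erin mentions (the part not shown) would typically look something like this in .htaccess; the paths here are hypothetical placeholders:

```
# Simple one-to-one 301 redirects (mod_alias)
Redirect 301 /old-page/ http://www.website.com/new-page/

# Or pattern-based 301s (mod_rewrite) for a whole section
RewriteRule ^old-category/(.*)$ /new-category/$1 [R=301,L]
```

The things to look out for are rules that return 302 instead of 301, redirect chains (old URL -> intermediate URL -> final URL), or rules that accidentally match pages they shouldn't.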
-
Thanks for the tips! Let me check it out.
-
I'd also ensure it's not something to do with your .htaccess file.
-
- Make sure the pages aren't blocked with a meta robots noindex tag.
- Use Fetch as Google in WMT to request a full site recrawl.
- Run brokenlinkcheck.com and see if their crawler is crawling successfully or if it's blocked.
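The noindex check in the first point can be scripted with Python's stdlib HTML parser. A small sketch (the markup fed in at the bottom is a made-up example, not the actual site's source):

```python
from html.parser import HTMLParser

class NoindexChecker(HTMLParser):
    """Flags pages whose markup carries a meta robots noindex directive."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names for us.
        if tag == "meta":
            a = {k: (v or "") for k, v in attrs}
            if a.get("name", "").lower() == "robots" and \
                    "noindex" in a.get("content", "").lower():
                self.noindex = True

checker = NoindexChecker()
checker.feed('<html><head><meta name="robots" content="noindex,follow">'
             '</head><body></body></html>')
print(checker.noindex)  # True -> this page would be dropped from the index
```

Feed each page's HTML through a fresh checker; any page that comes back `True` is telling the engines to drop it regardless of what robots.txt says.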