No description on Google/Yahoo/Bing, updated robots.txt - what is the turnaround time or next step for visible results?
-
Hello,
New to the MOZ community and thrilled to be learning alongside all of you! One of our clients' sites is currently showing a 'blocked' meta description due to an old robots.txt file (e.g., "A description for this result is not available because of this site's robots.txt").
We have updated the site's robots.txt to allow all bots. The meta tag has also been updated in WordPress (via the Yoast SEO plugin).
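As a sanity check, Python's standard library can confirm what a given robots.txt actually permits. This is a hypothetical sketch: the robots.txt content below is an assumed allow-all policy, not the client's actual file.

```python
from urllib.robotparser import RobotFileParser

# Assumed allow-all robots.txt -- an empty Disallow permits everything.
robots_txt = """\
User-agent: *
Disallow:
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# With this policy, every crawler may fetch every path.
print(rp.can_fetch("Googlebot", "https://www.example.com/"))       # True
print(rp.can_fetch("bingbot", "https://www.example.com/any/page"))  # True
```

Running the live file through the same check (via `rp.set_url(...)` and `rp.read()`) is a quick way to verify the update actually took effect on the server.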
See image here of Google listing and site URL: http://imgur.com/46wajJw
I have also ensured that the most recent robots.txt has been submitted via Google Webmaster Tools.
When can we expect these results to update? Is there a step I may have overlooked?
Thank you,
Adam -
Great, the good news is following submission of a sitemap via Webmaster Tools, things appear to be remedied on Google! It does seem, however, that the issue still persists on Bing/Yahoo.
Some of the 404's are links from an old site that weren't carried over following my redesign; so that will be handled shortly as well.
I've submitted the sitemap via Bing Webmaster Tools as well, so I presume it's a similar matter of simply 'waiting on Bing'?
Many thanks for your valuable insight!
-
Hi There
It seems like there are some other issues tangled up in this.
- First off, it looks like some non-www URLs indexed in Google are 301 redirecting to www but then 404'ing. It's good that they redirect to www, but they should end up on active pages.
- The non-www homepage is the one showing the robots.txt message. This should hopefully resolve in a week or two, once Google re-crawls the non-www URL and sees the 301. The actual solution is getting the non-www URLs out of the index and having Google rank the www homepage instead; the www homepage's description shows up just fine.
- You may want to register the non-www version of the domain in webmaster tools, and make sure to clean up any errors that pop up there as well.
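If the site runs on Apache, a single rewrite rule can send every non-www request to the www host while preserving the path, so redirected URLs land on live pages rather than 404s. This is a hedged sketch assuming Apache with mod_rewrite enabled, not a confirmed piece of the site's config:

```apache
# Assumed .htaccess rule: 301 any non-www request to the www host,
# keeping the requested path intact.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^altaspartners\.com$ [NC]
RewriteRule ^(.*)$ http://www.altaspartners.com/$1 [R=301,L]
```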
-
I just got this figured out, let's try dropping this into Google!
-
The 404 error may be related to a common issue with Yoast sitemaps: http://kb.yoast.com/article/77-my-sitemap-index-is-giving-a-404-error-what-should-i-do
The first step is to try resetting the permalink structure (Settings → Permalinks → Save Changes in WordPress); that may resolve the 404 error you're seeing. You definitely want to resolve the sitemap 404 so you can submit a crawlable sitemap to Google.
-
Thanks! It would seem that the Sitemap URL http://www.altaspartners.com/sitemap_index.xml brings up a 404 page, so I'm a bit confused with that step - but otherwise it appears to be very clear!
-
In WordPress, go to the Yoast plugin and locate the sitemap URL / settings. Plug the sitemap URL into your browser and make sure that it renders properly.
Once you have that exact URL, drop it into Google Webmaster Tools and let it process. Google will let you know if they found any errors that need correcting. Once submitted, you just need to wait for Google to update its index and reflect your site's meta description.
Yoast has a great blog that goes in depth about its sitemap features: https://yoast.com/xml-sitemap-in-the-wordpress-seo-plugin/
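For reference, Yoast generates a sitemap index that points at one child sitemap per content type. A valid index looks roughly like the sketch below; the child file names are hypothetical examples, not the site's confirmed output:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.altaspartners.com/post-sitemap.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://www.altaspartners.com/page-sitemap.xml</loc>
  </sitemap>
</sitemapindex>
```

If the index URL renders like this in a browser instead of returning a 404, it is ready to submit.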
-
Sounds great, Ray. How would I go about checking these URLs for the Yoast sitemap?
-
Yoast sets up a pretty efficient sitemap. Make sure the sitemap URL settings are correct, load it up in the browser to confirm, and submit your sitemap through GWT. That will help trigger a new crawl of the site and, hopefully, an update to the index so your meta descriptions begin to show in the SERPs.
-
Hi Ray,
With Fetch as Googlebot, I see a redirection for the non-www and a correct fetch for the www. Using Yoast SEO, it would seem the sitemap link leads to a 404?
-
Ha, that's exactly what I did.
I'm not showing any restrictions in your robots.txt file and the meta tag is assigned appropriately.
Have you tried fetching the site with the Webmaster Tools 'Fetch as Googlebot' tool? If there is an issue, it should be apparent there. Doing this may also help get your page re-crawled more quickly and the index updated.
If everything is as it should be and you're only waiting on a re-index, that usually takes no longer than two weeks (for very infrequently indexed websites). Fetching with Googlebot may speed things up, and getting an external link on a higher-trafficked page could help as well.
Have you tried resubmitting a sitemap through GWT as well? That could be another trick to getting the page re-crawled more quickly.
-
Hello Ray,
Specifically, the firm name, which is spelled a-l-t-a-s p-a-r-t-n-e-r-s (it is easy to confuse with "Atlas Partners", which is another company altogether).
-
What was the exact search term you used to bring up those SERPs?
When I search 'altaspartners' and 'altaspartners.com', it brings up your site with a meta description.
Related Questions
-
Crawl solutions for landing pages that don't contain a robots.txt file?
My site (www.nomader.com) is currently built on Instapage, which does not offer the ability to add a robots.txt file. I plan to migrate to a Shopify site in the coming months, but for now the Instapage site is my primary website. In the interim, would you suggest that I manually request a Google crawl through the search console tool? If so, how often? Any other suggestions for countering this Meta Noindex issue?
Technical SEO | | Nomader1 -
Will redirecting a logged in user from a public page to an equivalent private page (not visible to google) impact SEO?
Hi, We have public pages that can obviously be visited by our registered members. When they visit these public pages + they are logged in to our site, we want to redirect them to the equivalent (richer) page on the private site e.g. a logged in user visiting /public/contentA will be redirected to /private/contentA Note: Our /public pages are indexed by Google whereas /private pages are excluded. a) will this affect our SEO? b) if not, is 302 the best http status code to use? Cheers
Technical SEO | | bernienabo0 -
Google's Omitted Results - Attempt to De-Index
We're trying to get webpages from our QA site out of Google's index. We've inserted the NOINDEX tags. Google now shows only 3 results (down from 196,000), however, they offer a link to "show omitted results" at the bottom of the page. (A) Did we do something wrong? or (B) were we successful with our NOINDEX but Google will offer to show omitted results anyway? Please advise! Thanks!
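For anyone landing here: a page-level noindex directive typically looks like the snippet below. Note that the QA pages must remain crawlable (not blocked in robots.txt) for Google to see the tag at all.

```html
<!-- Page-level directive to keep a QA page out of the index -->
<meta name="robots" content="noindex, nofollow">
```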
Technical SEO | | BVREID0 -
How to remove my cdn sub domins on Google search result?
A few months ago I moved all my WordPress images to a subdomain. After I purchased a CDN service, I moved those images back to my root domain. I added User-agent: * Disallow: / to my CDN domain. But now, when I perform a site: search on Google, I find that my CDN subdomains are indexed by Google. I think this will create a duplicate content issue, and I was already hit by Penguin. How do I remove these results from Google? Should I add my CDN domain to Webmaster Tools to submit a URL removal request? The problem is, if I use cdn.mydomain.com, it shows my www.mydomain.com. My blog:- http://goo.gl/58Utt site search result:- http://goo.gl/ElNwc
Technical SEO | | Godad1 -
"Extremely high number of URLs" warning for robots.txt blocked pages
I have a section of my site that is exclusively for tracking redirects for paid ads. All URLs under this path do a 302 redirect through our ad tracking system: http://www.mysite.com/trackingredirect/blue-widgets?ad_id=1234567 --302--> http://www.mysite.com/blue-widgets This path of the site is blocked by our robots.txt, and none of the pages show up for a site: search. User-agent: * Disallow: /trackingredirect However, I keep receiving messages in Google Webmaster Tools about an "extremely high number of URLs", and the URLs listed are in my redirect directory, which is ostensibly not indexed. If not by robots.txt, how can I keep Googlebot from wasting crawl time on these millions of /trackingredirect/ links?
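One approach worth considering (this is a hedged sketch that assumes an Apache server and that letting Googlebot crawl the redirects is acceptable): robots.txt only blocks crawling, not indexing, so Google can still report URLs it discovers through links. Removing the Disallow and instead serving a noindex header on that path gives Googlebot an explicit directive it can actually see:

```apache
# Hypothetical Apache config (requires mod_headers): noindex everything
# under /trackingredirect/ instead of blocking it in robots.txt.
# "always" is needed so the header is also set on 302 responses.
<LocationMatch "^/trackingredirect/">
    Header always set X-Robots-Tag "noindex, nofollow"
</LocationMatch>
```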
Technical SEO | | EhrenReilly0 -
Google insists robots.txt is blocking... but it isn't.
I recently launched a new website. During development, I'd enabled the option in WordPress to prevent search engines from indexing the site. When the site went public (over 24 hours ago), I cleared that option. At that point, I added a specific robots.txt file that only disallowed a couple directories of files. You can view the robots.txt at http://photogeardeals.com/robots.txt Google (via Webmaster tools) is insisting that my robots.txt file contains a "Disallow: /" on line 2 and that it's preventing Google from indexing the site and preventing me from submitting a sitemap. These errors are showing both in the sitemap section of Webmaster tools as well as the Blocked URLs section. Bing's webmaster tools are able to read the site and sitemap just fine. Any idea why Google insists I'm disallowing everything even after telling it to re-fetch?
Technical SEO | | ahockley0 -
Duplicate Content Issue: Google/Moz Crawler recognize Chinese?
Hi! I am using WordPress multisite and the Chinese version of my website is at www.mysite.com/cn. Problem: I keep getting duplicate content errors within www.mysite.com/cn (NOT between www.mysite.com and www.mysite.com/cn). I have downloaded and checked the SEOmoz report and the duplicate_page_content list in the CSV file. I have no idea why it says the pages have the same content; they have nothing in common. www.mysite.com is the English version of the website, and the structure is the same for www.mysite.com/cn. *I don't have any duplicate content issues within www.mysite.com. Question: Does the Google crawler properly recognize Chinese content?
Technical SEO | | joony20080 -
Robots.txt and 301
Hi Mozzers, Can you answer something for me please? I have a client and they have 301 redirected the homepage '/' to '/home.aspx', so all or most of the link juice is being passed, which is great. They have also marked the '/' as nofollow/noindex in the robots.txt file so it's not being crawled. My question is: if the '/' is being denied access to robots, is it still passing on the authority for the links that go into this page? It is a 301 and not a 302, so it would work under normal circumstances, but as the page is not being crawled, do I need to change the robots.txt to allow crawling of the '/'? Thanks Bush
Technical SEO | | Bush_JSM0