Page not being indexed or crawled and no idea why!
-
Hi everyone,
There are a few pages on our website that aren't being indexed right now on Google and I'm not quite sure why. A little background:
We are an IT training and management training company with locations/classrooms around the US. To improve our search rankings and overall visibility, we made some changes to the on-page content, URL structure, etc. Let's take our Washington, DC location as an example. The old address was:
http://www2.learningtree.com/htfu/location.aspx?id=uswd44
And the new one is:
http://www2.learningtree.com/htfu/uswd44/reston/it-and-management-training
Not all of the SEO changes are live yet, so bear with me. My question is really about why the first URL is still being crawled and indexed and showing fine in the search results, while the second one (which we want to show) is not. The changes have been live for around a month now - plenty of time to at least be indexed.
In fact, we don't want the first URL to show anymore; we'd like the second URL type to show across the board. Also, when I type site:http://www2.learningtree.com/htfu/uswd44/reston/it-and-management-training into Google, I get a message that Google can't read the page because of the robots.txt file. But we have no robots.txt file. I've been told by our web guys that the two pages are exactly the same. I was also told that we've put in an order to have all the old links 301 redirected to the new ones. Still, I'm perplexed as to why these pages aren't being crawled or indexed - I've even manually submitted them in Webmaster Tools.
So, why is Google still recognizing the old URLs and why are they still showing in the index/search results?
And why is Google saying "A description for this result is not available because of this site's robots.txt"?
Thanks in advance!
- Pedram
-
Hi Mike,
Thanks for the reply. I'm out of the country right now, so my replies might be somewhat slow.
Yes, we have links to the pages in our sitemaps, and I have done fetch requests. I just checked, and it seems the niched "New York" page is being crawled now. It might have been a timing issue, as you suggested. But our DC page still isn't being crawled. I'll check on it periodically and watch the progress. I really appreciate your suggestions - they're already helping. Thank you!
-
It possibly just hasn't been long enough for the spiders to re-crawl everything yet. Have you done a fetch request in Webmaster Tools for the page and/or site to see if you can jumpstart things a little? It's also possible that the spiders haven't found a path to it yet. Do you have enough (or any) pages linking to that second page that isn't being indexed yet?
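If you want to verify that the new URLs are actually listed in the sitemap you submitted, here's a quick sketch using Python's standard-library XML parser. The sitemap snippet below is illustrative, not the site's real file:

```python
import xml.etree.ElementTree as ET

# Illustrative sitemap snippet - not the site's actual file.
SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www2.learningtree.com/htfu/uswd44/reston/it-and-management-training</loc></url>
  <url><loc>http://www2.learningtree.com/htfu/usny27/new-york/sharepoint-training</loc></url>
</urlset>"""

def sitemap_urls(xml_text):
    """Return every <loc> URL listed in a sitemap document."""
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    root = ET.fromstring(xml_text)
    return [loc.text for loc in root.findall("sm:url/sm:loc", ns)]

urls = sitemap_urls(SITEMAP)
print("http://www2.learningtree.com/htfu/uswd44/reston/it-and-management-training" in urls)
```

Running this against your real sitemap file would confirm the new-style URLs are present for the spiders to discover.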
-
Hi Mike,
As a follow-up, I forwarded your suggestions to our webmasters. They adjusted the robots.txt, and it now reads as follows - which I think might still cause issues, and I'm not 100% sure why:

User-agent: *
Allow: /htfu/
Disallow: /htfu/app_data/
Disallow: /htfu/bin/
Disallow: /htfu/PrecompiledApp.config
Disallow: /htfu/web.config
Disallow: /

Now, this page is being indexed: http://www2.learningtree.com/htfu/uswd74/alexandria/it-and-management-training
But a more niched page still isn't being indexed: http://www2.learningtree.com/htfu/usny27/new-york/sharepoint-training
Suggestions?
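For what it's worth, you can sanity-check how those rules read to a robots.txt parser with Python's built-in urllib.robotparser. Note that Google resolves Allow/Disallow conflicts by the most specific (longest) matching rule, which for paths like /htfu/app_data/ can differ from simpler parsers, so treat this as a rough check only:

```python
import urllib.robotparser

# The rules the webmasters deployed, as posted above.
RULES = """\
User-agent: *
Allow: /htfu/
Disallow: /htfu/app_data/
Disallow: /htfu/bin/
Disallow: /htfu/PrecompiledApp.config
Disallow: /htfu/web.config
Disallow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

# Pages under /htfu/ are permitted by the explicit Allow rule...
print(rp.can_fetch("Googlebot", "/htfu/usny27/new-york/sharepoint-training"))  # True
# ...while everything else falls through to the blanket Disallow: /
print(rp.can_fetch("Googlebot", "/"))  # False
```

So the rules as posted do permit crawling of the /htfu/ pages, which suggests the niched page's delay is a crawl-scheduling issue rather than a robots.txt block.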
-
The pages in question don't have any Meta Robots tags on them. So once the Disallow in robots.txt is gone and you do a fetch request in Webmaster Tools, the pages should get crawled and indexed fine. If a page doesn't have a Meta Robots tag, the spiders treat it as index,follow. Personally, I prefer to include the index,follow tag anyway, even if it isn't 100% necessary.
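For reference, the explicit tag described above is a standard snippet that goes in each page's head section (shown generically here, not copied from the site's actual source):

```html
<meta name="robots" content="index, follow">
```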
-
Thanks, Mike. That was incredibly helpful. See, I did click the link on the SERP when I did the site: search on Google, but I thought it was a mistake. Were you able to see the disallow directive in the source code?
-
Your robots.txt (which can be found at http://www2.learningtree.com/robots.txt) does in fact have Disallow: /htfu/, which would block http://www2.learningtree.com/htfu/uswd44/reston/it-and-management-training from being crawled. While your old page is also technically blocked, it has been around longer and was already cached, so it will still appear in the SERPs - the bots just won't be able to see changes made to it, because they can't crawl it.
You need to fix the disallow so the bots can crawl your site correctly and you should 301 your old page to the new one.
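On the 301 side, the mapping from the old query-string URLs to the new paths has to come from the site's own location data; here's a minimal sketch of that lookup in Python, with a hypothetical table (the ids and slugs are illustrative, since only your database knows each id's city and slug):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical id -> new-path table; the real values live in the site's data.
NEW_PATHS = {
    "uswd44": "/htfu/uswd44/reston/it-and-management-training",
    "uswd74": "/htfu/uswd74/alexandria/it-and-management-training",
}

def redirect_target(old_url):
    """Return the new path an old location.aspx URL should 301 to, or None."""
    parsed = urlparse(old_url)
    if parsed.path.lower() != "/htfu/location.aspx":
        return None
    loc_id = parse_qs(parsed.query).get("id", [""])[0].lower()
    return NEW_PATHS.get(loc_id)

print(redirect_target("http://www2.learningtree.com/htfu/location.aspx?id=uswd44"))
```

Whatever redirect mechanism your web guys use (URL rewriting on the server, for instance) needs to implement this kind of one-to-one mapping, returning a 301 status so the old URLs pass their signals to the new ones.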