Indexed, though blocked by robots.txt: Do we need to bother?
-
Hi,
We have intentionally blocked some website files that had been indexed for years, and now Google Search Console shows the message "Indexed, though blocked by robots.txt." As far as I know, we can ignore this? Or are any actions required? We thought about blocking them with meta tags, but these are PDF files.
Thanks
-
Hi there!
What Google is telling you is that URLs you probably don't want indexed are being indexed, or, the other way around, that important pages are blocked from crawling yet remain indexed for other reasons (typically because other pages link to them).
If I may ask, why did you block those files through robots.txt?
The two most likely answers are:
1- You wanted to remove those files from search results. If this is your case, you've only solved part of the problem. What you should do is first allow robots to crawl those URLs again, then apply a noindex rule. Keep in mind that noindex can be set in the HTTP response header (X-Robots-Tag), and that's the way to do it here, since non-HTML files like PDFs can't carry a meta robots tag. Then, after a sufficient time, block them in robots.txt again.
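For the PDF case, here is a minimal sketch of that header rule, assuming an Apache server with mod_headers enabled (the exact mechanism will differ on nginx or other stacks):

    # Send "X-Robots-Tag: noindex" with every PDF response
    <FilesMatch "\.pdf$">
      Header set X-Robots-Tag "noindex"
    </FilesMatch>

You can check that the header is actually being sent with curl -I https://www.example.com/some-file.pdf (example URL; substitute one of your blocked PDFs).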
2- You wanted to optimize Googlebot's crawling time (crawl budget). If this is your case, then you've done it correctly and there is nothing to worry about.
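For completeness, the matching robots.txt rule for the crawl-budget case might look like this (Googlebot supports the * and $ pattern wildcards; not every crawler does):

    User-agent: *
    # Block crawling of any URL ending in .pdf
    Disallow: /*.pdf$

Just remember the order of operations from point 1: if the goal is deindexing, this Disallow should only go back in after the noindex header has had time to take effect, because a blocked URL can't be recrawled to see the noindex.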
Hope this helps. Best of luck.
GR
Related Questions
-
My site is showing as indexed in Search Console but not appearing in SERPs
Hi, I recently made a Google Sites site and submitted it to Search Console, but when I copy and paste the URL into Google, it's not appearing.
Algorithm Updates | alan-shultis
-
Do non-indexed or indexed top-hierarchy pages get high PageRank at Google?
Hi, We are creating some pages just to capture leads from blog posts. We created a few pages at the top of the hierarchy, like website.com/new-page/. I'm just wondering if these pages will take away more PageRank. Do we need to create these pages lower in the hierarchy, like website.com/folder/new-page, to avoid passing more PageRank? Is this how PR is distributed even now, and is it the same for indexed and non-indexed pages? Thanks
Algorithm Updates | vtmoz
-
Can a site blocked for US visitors rank well internationally?
For regulatory reasons, a stock trading site needs to be blocked to United States visitors. Since most of Google's datacenters seem to be located in the US, can this site rank well in the other countries where it does business, despite being blocked in the US? Do U.S. Google data centers influence only US rankings?
Algorithm Updates | tabwebman
-
Google has indexed some of our old posts. What took so long and will we lose rank for their brevity?
Hi, We just had a few of our old blog posts indexed by Google. They are short-form posts, and I want to make sure we're not going to get dinged by Google for their length. Can you advise? https://www.policygenius.com/blog/guaranteed-issue
Algorithm Updates | francoisdelame
-
Bing not indexing pages
We have taken all the recommended steps to get our site's (sitegeek.com) pages indexed by Bingbot, but they are still not being indexed. Bingbot crawls more than 5,000 pages every day, so it's strange that the pages are not getting indexed. If we query site:sitegeek.com in Bing, the Bing search engine shows only 1,200 pages indexed, but if we query site:sitegeek.com in Google, the Google search engine shows more than 546,000 pages indexed. For example: https://www.sitegeek.com/000webhost The page above is crawled by Google but not by Bing. Can anyone suggest what we are missing on this page? What needs to change to get such pages indexed? Thanks! Rajiv
Algorithm Updates | gamesecure
-
Is it still a rule that Google will only index pages up to three tiers deep? Or has this changed?
I haven't looked into this in a while; it used to be that you didn't want to bury pages beyond three clicks from the main page. What is the rule now in order to have deep pages indexed?
Algorithm Updates | seoessentials
-
Any ideas why our category pages got de-indexed?
Hi all, I work for eVenues, a directory website that provides listings of meeting rooms and event spaces. Things seemed to be chugging along nicely with our link building effort (mostly through guest blogging using a variety of anchor text). Then I woke up on Monday morning to find that our city pages had been de-indexed. This page: http://www.evenues.com/Meeting-Spaces/Seattle/Washington used to be at the top of page #2 in the SERPs for the keyword "Meeting Rooms in Seattle". I doubt that we got de-indexed because of our link building efforts, as it was only a few blog posts and links from profile pages on community websites. My guess is that when we did a recent 2.0 release of the site, we introduced several "filter" or subcategory pages with latitude and longitude parameters in the URL, plus different page titles based on the categories, like:
"Meeting Rooms and Event Spaces in Seattle" (main page)
"Meeting Rooms in Seattle"
"Classroom Venues in Seattle"
"Party Venues in Seattle"
There was a bit of pushback when I suggested that we put a rel="canonical" on these babies, because ideally we'd like to rank for all four queries (Meeting Rooms, Party Venues, Classrooms, in City). These are new changes, and I have a sneaking suspicion this is why we got de-indexed. We're presenting generally the same content. Thoughts?
Algorithm Updates | eVenuesSEO
-
Is it hurting my SEO ranking if robots.txt is forbidden?
robots.txt is forbidden - I have read up on what the robots.txt file does and how to configure it, but what about when it is not able to be accessed at all?
Algorithm Updates | Assembla