Robots.txt file issue.
-
Hi,
It's my third thread here, and I have created many like it on other webmaster communities. I know many pros are here, and I badly need help.
Robots.txt blocked 2k important URLs of my blogging site,
http://Muslim-academy.com/, especially in my blog area, which brings a good number of visitors daily. My organic traffic declined from 1k daily to 350.
I have removed the robots.txt file, resubmitted the existing sitemap, used all the Fetch-to-index options, and used the 50-URL submission option in Bing Webmaster Tools.
What can I do now to get these blocked URLs back in the Google index?
1. Create a NEW sitemap and submit it again in Google Webmaster Tools and Bing Webmaster Tools?
2. Bookmark, build links to, or share the URLs? I have already done a lot of bookmarking for the blocked URLs.
I fetched the list of blocked URLs using Bing Webmaster Tools.
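For anyone who wants to pull that list without relying on Bing's export, here is a rough sketch in Python (standard library only) that tests every sitemap URL against a robots.txt. It assumes one flat sitemap at /sitemap.xml (no sitemap index) and that the robots.txt being tested is still reachable; note that the parser treats a missing (404) robots.txt as "allow everything":

    # Sketch: list which sitemap URLs a robots.txt blocks for Googlebot.
    import urllib.request
    import urllib.robotparser
    import xml.etree.ElementTree as ET

    SITE = "http://muslim-academy.com"

    # Fetch and parse the live robots.txt.
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(SITE + "/robots.txt")
    rp.read()

    # Pull every <loc> entry out of the sitemap.
    with urllib.request.urlopen(SITE + "/sitemap.xml") as resp:
        tree = ET.parse(resp)
    NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
    urls = [loc.text for loc in tree.iter(NS + "loc")]

    # Keep only the URLs Googlebot would be forbidden to crawl.
    blocked = [u for u in urls if not rp.can_fetch("Googlebot", u)]
    print("%d of %d sitemap URLs blocked" % (len(blocked), len(urls)))
    for u in blocked:
        print(u)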
-
Robert, some good signs of life. The new sitemap shows 5,080 pages submitted and 4,817 indexed.
The remaining pages are surely the blocked ones, right, Robert? There is also some improvement in impressions and clicks. Thanks a lot for staying with me this long to solve this issue.
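One way to check that rather than assume it: the gap is 5080 - 4817 = 263 pages, and you can intersect the sitemap URLs with the blocked-URL export from Bing to see how much of that gap the old block explains. A rough sketch; both file names are hypothetical and each file holds one URL per line:

    # Sketch: how many formerly blocked URLs are in the submitted sitemap?
    with open("sitemap_urls.txt") as f:
        submitted = {line.strip() for line in f if line.strip()}
    with open("bing_blocked_urls.txt") as f:
        blocked = {line.strip() for line in f if line.strip()}

    overlap = submitted & blocked
    print("%d formerly blocked URLs are in the sitemap" % len(overlap))

If the overlap is close to 263, the remaining unindexed pages are very likely the blocked ones still waiting to be recrawled.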
-
Christopher,
Have you looked at indexing in GWMT to see whether the pages have been indexed, how many pages, etc.?
-
Got your point, but I resubmitted it and its status is still pending.
I tested it and it was working, but since I submitted it two days ago its status has been pending.
-
No. When you resubmit or submit a "new" sitemap, it just tells Google that this is the current sitemap. A sitemap itself does not create a duplicate-content issue.
Best,
Robert
-
Just one last question, Robert. Doesn't a duplicate sitemap create duplicate pages in the search results?
Sorry, my question may look crazy to you, but while applying every possible fix I want to make sure I do not mess up and make things even worse.
-
Given that the only issue was the robots.txt error, I would resubmit. That said, it would not hurt to generate a fresh sitemap and submit it, in case there is something you are missing.
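If you do generate a fresh one, keep in mind the format itself is trivial; a minimal valid sitemap is just this (the URL is a hypothetical example):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://muslim-academy.com/an-example-post/</loc>
      </url>
    </urlset>

Any generator or plugin that produces that structure, with one <url> entry per page, is fine.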
Best
-
Robert, the question is: do I need to create a new sitemap, or resubmit the existing one?
-
Hello Christopher
It appears you have done a good deal to remediate the situation already. I would resubmit a sitemap to Google as well. Have you looked in WMT to see what is now indexed? I would look at the graph of indexed pages versus robots.txt-blocked URLs and see if you are moving the needle upward again.
That raises a second question: "How did it happen?" You stated, "Robots.txt blocked 2k important URLs of my blogging site," and that sounds like it just occurred out of the ether. I would want to know that I had found the reason and that I have a way to keep it from happening going forward (just a suggestion).
Lastly, the Index Status report in WMT is a great way to learn how effective your fixes have been. I like knowing that type of data and storing it somewhere retrievable for the future.
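For what it's worth, the usual culprits are a development-site robots.txt carried over to production, a CMS "discourage search engines" setting, or one overbroad rule. Hypothetically, two lines like these are enough to block an entire blog section for every crawler:

    # Hypothetical example only -- not necessarily your actual file.
    User-agent: *
    Disallow: /blog

If your old file had anything like that in it, that is your answer.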
Best to you,
Robert