What is this robots.txt file issue?
-
I hope you are well. Moz sent me a notification that my website can't be crawled, and it says to check the robots.txt file. Now the question is: how can I solve this problem, and what should I write in the robots.txt file?
Here is my website. https://www.myqurantutor.com/
Need your help, brothers. Thanks in advance!
-
Not sure. Your robots.txt file looks fine & shouldn't be blocking anything except for admin:
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Sitemap: https://www.myqurantutor.com/sitemap_index.xml
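If you want to double-check rules like these yourself, here is a minimal sketch using Python's standard-library robots.txt parser. Note two assumptions: the test URLs are illustrative, and the Allow line is listed first because Python's parser applies the first matching rule, whereas Google picks the most specific rule regardless of order.

```python
from urllib.robotparser import RobotFileParser

# Same rules as the answer above, with Allow listed first so that
# Python's first-match evaluation agrees with Google's most-specific-match.
rules = """\
User-agent: *
Allow: /wp-admin/admin-ajax.php
Disallow: /wp-admin/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Regular pages are crawlable; only the admin area is blocked,
# with an exception for admin-ajax.php.
print(parser.can_fetch("*", "https://www.myqurantutor.com/"))                        # True
print(parser.can_fetch("*", "https://www.myqurantutor.com/wp-admin/"))               # False
print(parser.can_fetch("*", "https://www.myqurantutor.com/wp-admin/admin-ajax.php")) # True
```

If every check above comes back as expected, the crawl warning is unlikely to be caused by the robots.txt rules themselves.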
Related Questions
-
Google Search Console issue: "This is how Googlebot saw the page" showing part of page being covered up
Hi everyone! Kind of a weird question here, but I'll ask and see if anyone else has seen this. In Google Search Console, when I do a fetch and render request for a specific site, the fetch and blocked resources all look A-OK. However, in the render, there's a large grey box (the background of the navigation) that covers up a significant amount of what is on the page. Attaching a screenshot. You can see the text start peeking out below (had to trim for confidentiality reasons), but behind that block of grey IS text, and text that, in the fetch part, Googlebot apparently does see and can crawl. My question: is this an issue? Should I be concerned about this visual look, or no? I've never experienced an issue like this. I will say, I'm trying to make a play at a featured snippet and can't seem to have Google display this page's information, despite it being the first result and the query showing a featured snippet from the #4 result. I know a snippet isn't guaranteed for the #1 result, but I wonder if this has anything to do with why it isn't showing one.
On-Page Optimization | ChristianMKG
-
Index or No Index (Panda Issue)
Hi, I believe our website has been penalized by the Panda update. We have over 9,000 pages and are currently indexing around 4,000 of them. I believe that more than half of the indexed pages have thin content. Should we stop indexing those pages until we have quality page content? That would leave us with very few pages indexed by Google (roughly 1,000 of our 9,000 pages have quality content). I am worried that we would hurt our organic traffic more by not indexing the pages than by letting Google read them. Any help would be greatly appreciated. Thanks, Jim Rodriguez
On-Page Optimization | dustyabe
-
Internal Linking: File Name or URI/filename.html
Hi, For crawlability, what would be the best way to do internal linking: /aboutus.html or www.xyz.com/aboutus.html? Would this make any difference to the website in terms of SEO or bots crawling the website? Regards, Sree
On-Page Optimization | jungleegames
-
I'm seeing a dot after the / on a new project, never seen this before. Any issues using this format?
Hi, I've got a new project and am seeing a dot after the forward slash, something I've never seen before. What does it mean? Are there any SEO issues regarding it? Is it bad practice, or fine to proceed using that format? Example below: www.domain.co.uk/.cool-new-product Thanks, Dan
On-Page Optimization | Dan-Lawrence
-
Best practice for Meta-Robots tag in categories and author pages?
For some of our sites we use WordPress, which we really like working with. The question I have is about the category and author pages (and similar pages), i.e. ones like http://www.domain.com/authors/. Should you or should you not use "follow, noindex" for meta robots? We have a lot of categories/tags/authors, which generates a lot of pages. I'm a bit worried that Google won't like this and am leaning towards adding "follow, noindex". But the more I read about it, the more I see people disagree. What does the SEOmoz community think?
On-Page Optimization | Lobtec
-
How to Resolve Google Crawling Issues for My eCommerce Website?
I want to resolve Google crawling issues for my eCommerce website: http://www.vistastores.com/ Google has crawled only 97 webpages from my website. My website is quite old (more than 6 months), but Google has indexed only 97 webpages. I created a campaign in the SEOmoz tool and found some errors there, so I assumed that was why Google did not crawl my website. But then I created another campaign for a competitor's website to check its actual status. I found that my competitor's website has more errors than mine, yet Google has crawled far more of its pages. So what is the reason behind this? How can I improve my crawl rate and get the maximum number of webpages indexed by Google?
On-Page Optimization | CommercePundit
-
Robots.txt: excluding URL
Hi, spiders crawl some dynamic URLs on my website (example: http://www.keihome.it/elettrodomestici/cappe/cappa-vision-con-tv-falmec/714/ and http://www.keihome.it/elettrodomestici/cappe/cappa-vision-con-tv-falmec/714/open=true) as different pages, resulting in duplicate content, of course. What is the syntax to disallow these kinds of URLs in robots.txt? Thanks so much
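For Google specifically, a wildcard pattern such as `Disallow: /*open=true` would catch parameterised duplicates like the one above (wildcards `*` and end-anchor `$` are a Google/Bing extension, not part of the original robots.txt standard, and that exact rule is an assumption about the duplicate-URL pattern). A rough sketch that translates such a rule into a regex to show which paths it would block:

```python
import re

# Hypothetical Google-style rule: '*' matches any run of characters.
rule = "/*open=true"

# Escape the rule, then re-enable the wildcard and end-anchor semantics.
pattern = re.compile(
    re.escape(rule).replace(r"\*", ".*").replace(r"\$", "$")
)

urls = [
    "/elettrodomestici/cappe/cappa-vision-con-tv-falmec/714/",
    "/elettrodomestici/cappe/cappa-vision-con-tv-falmec/714/open=true",
]
for url in urls:
    # pattern.match anchors at the start of the path, like robots.txt rules do.
    print(url, "blocked" if pattern.match(url) else "allowed")
```

The clean product URL stays crawlable while the `open=true` variant is blocked. A rel="canonical" tag pointing at the clean URL is often a safer fix for duplicates, though, since blocked URLs can still be indexed from links.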
On-Page Optimization | anakyn
-
Will a "noindex, nofollow" meta tag resolve a duplicate content issue?
I have a duplicate content issue. If the page has already been indexed, will a noindex, nofollow tag resolve the issue, or do I also need a rel="canonical" tag?
On-Page Optimization | McKeeMarketing