Site not being indexed as fast anymore. Is something wrong with this robots.txt?
-
My WordPress site's robots.txt used to be this:
User-agent: *
Disallow:
Sitemap: http://www.domainame.com/sitemap.xml.gz
I also have All in One SEO installed, and besides posts, tags are also set to index,follow on my site.
My new posts used to appear on Google within seconds of publishing. I changed the robots.txt to the following, and now indexing a post takes hours.
Is there something wrong with this robots.txt?
User-agent: *
Disallow: /cgi-bin
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes
Disallow: /wp-login.php
Disallow: /trackback
Disallow: /feed
Disallow: /comments
Disallow: /author
Disallow: /category
Disallow: */trackback
Disallow: */feed
Disallow: */comments
Disallow: /login/
Disallow: /wget/
Disallow: /httpd/
Disallow: /*.php$
Disallow: /?
Disallow: /*.js$
Disallow: /*.inc$
Disallow: /*.css$
Disallow: /*.gz$
Disallow: /*.wmv$
Disallow: /*.cgi$
Disallow: /*.xhtml$
Disallow: /*?
Allow: /wp-content/uploads
User-agent: TechnoratiBot/8.1
Disallow:
# ia_archiver
User-agent: ia_archiver
Disallow: /
# disable duggmirror
User-agent: duggmirror
Disallow: /
# allow google image bot to search all images
User-agent: Googlebot-Image
Disallow: /wp-includes/
Allow: /*
# allow adsense bot on entire site
User-agent: Mediapartners-Google*
Disallow:
Allow: /*
-
I am not sure why you are disallowing file types. Google would not index .wmv or .js files anyway, as it cannot parse those types for data. If you want to coax Google into indexing your site, submit a sitemap in Webmaster Tools. You could also set nofollow on the anchors for the pages you want to exclude, and keep robots.txt cleaner by including only top-level subdirectories such as admin. There just seem to be a lot of directories in there that do not relate to actual pages, and Google is only concerned with renderable pages. A leaner file might look like the sketch below.
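For illustration only, a minimal sketch of that leaner approach, keeping the standard WordPress paths and the sitemap reference from the original file:

User-agent: *
Disallow: /cgi-bin/
Disallow: /wp-admin/
Disallow: /wp-includes/

Sitemap: http://www.domainame.com/sitemap.xml.gz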
-
Hello,
Robots.txt allows or disallows access to certain files or folders. It cannot delay or slow down crawling. I do not think the problem is the robots.txt.
Radu
-
Why don't you revert to the original robots.txt and determine for certain whether the problem is with this file?
Related Questions
-
Robots.txt Syntax for Dynamic URLs
I want to disallow certain dynamic pages in robots.txt and am unsure of the proper syntax. The pages I want to disallow all include the string ?Page=. Which is the proper syntax?
Technical SEO | btreloar
Disallow: ?Page=
Disallow: ?Page=*
Disallow: ?Page=
Or something else?
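For reference, major crawlers such as Googlebot support the * wildcard in robots.txt, so a pattern like the sketch below would match any URL containing the string ?Page= (this follows Google's documented wildcard behavior; verify against your own URLs in the robots.txt tester before relying on it):

User-agent: *
Disallow: /*?Page=

-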
Should a login page for a payroll / timekeeping company be nofollow for robots.txt?
I am managing a Timekeeping/Payroll company. My question is about the customer login page. Would this typically be nofollow for robots?
Technical SEO | donsilvernail
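As a quick sketch of the robots.txt route: disallowing the login URL keeps crawlers out of it entirely, which is usually what a customer login page wants, while nofollow on the links pointing to it is a separate, weaker signal. The /login/ path below is a placeholder; use the page's real path:

User-agent: *
Disallow: /login/

-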
These days on Google results, it also shows the site map. I submitted my company's sitemap and it still does not show. What am I doing wrong?
Look at the image in the link. I want my company's result to look like the "pluralsight" website in Google, with the sitemap links shown. I already submitted the sitemap to Google a few days back; what am I doing wrong?
Technical SEO | Deein
-
Wrong canonical URL was specified. How to refresh the index now?
A wrong canonical URL was applied to thousands of pages of a client website, pointing them all to a single non-existent URL. Google has since de-indexed most of those pages. We have now fixed the problem, but how do we get search engines to crawl those pages again and start showing them in search results? I understand that a slow recovery is possible if we do nothing; I was wondering if we can fast-track it. Any pointers? Thanks
Technical SEO | Krupesh
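As background, the canonical is just a link element in each page's head, so the fix means every page once again references its own URL, as in the placeholder below. Re-submitting the sitemap and requesting recrawls in Webmaster Tools is the usual way to speed recovery.

<link rel="canonical" href="https://www.example.com/this-page/" />

-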
I must be doing something very wrong
Can I get some direct advice for a domain I am trying to optimize? The domain is mmfiles.com. It contains software submitted by users: 1,500 different listings/pages with title/description, around 20,000 Google results for the domain, and the site has been live since Dec 2008. The problem: I am getting 20-40 hits from Google per day, which is pathetic. The best days were around June 2010, at ~400 hits/day (if it matters). I am not sure what my problem is, but with so much content and so few hits I must be doing something very wrong. Some possible problems and things I did (see the redirect sketch after this question): Google says I have 8 backlinks; that is not good, but I know it's not all about links. SEOmoz says I have "too many on-page links"; can this be so important? How should I redirect users landing on a URL that moved? E.g. a software title can change, so the old location /12/photo-gallery/ is now /12/xml-photo-gallery/. If a user lands on the old URL, should I 301 redirect to the new one? I can tell the intention by the listing number. I used to 301 redirect; now I just display the same content on any URL string like /12/whatever/. I also put rel="nofollow" on some internal pages like the contact page, login, register, etc., hoping to prevent diluting the PageRank. If someone can have a look at the site and mention the most obvious SEO problems, it would be great.
Technical SEO | adrianTNT
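On the redirect part of that question: a 301 consolidates signals onto the new URL, which generally beats serving identical content on every URL variant. A minimal PHP sketch, where canonical_url_for() is a hypothetical helper that looks up a listing's current slug by its ID:

<?php
// Hypothetical helper: returns the current canonical path for a listing ID,
// e.g. canonical_url_for(12) => "/12/xml-photo-gallery/".
function canonical_url_for(int $id): string {
    // ...look up the listing's current slug in the database...
    return "/$id/xml-photo-gallery/"; // placeholder
}

$id = 12; // parsed from the requested URL, e.g. /12/photo-gallery/
$canonical = canonical_url_for($id);

if ($_SERVER['REQUEST_URI'] !== $canonical) {
    // Permanent redirect so search engines transfer the old URL's signals.
    header('Location: http://mmfiles.com' . $canonical, true, 301);
    exit;
}

-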
I noticed all my SEOed sites are constantly getting attacked by viruses. I build WordPress sites. Does anyone have a good recommendation to protect my clients' sites? Thanks
We have tried all different kinds of security plugins but none seem to work long term.
Technical SEO | Carla_Dawson
-
I accidentally blocked Google with Robots.txt. What next?
Last week I uploaded my site and forgot to remove the robots.txt file with this text:
User-agent: *
Disallow: /
I dropped from page 11 on my main keywords to past page 50. I caught it 2-3 days later and have now fixed it. I re-submitted my sitemap in Webmaster Tools and also did a Fetch as Google. I tweeted out my URL to hopefully get Google to crawl it faster too. Webmaster Tools no longer says the site is experiencing outages, but when I look at my blocked URLs it still says 249 are blocked; that number has actually gone up since I made the fix. In the Google search results it still doesn't show my page title, and the description still says "A description for this result is not available because of this site's robots.txt – learn more." How will this affect me long-term? When will I recover my rankings? Is there anything else I can do? Thanks for your input! www.decalsforthewall.com
Technical SEO | Webmaster123
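For anyone hitting the same accidental block: the culprit is the blanket Disallow: / rule quoted above, and the corrected file is simply the empty-disallow form, which permits all crawling:

User-agent: *
Disallow: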