Why does my site have so many crawl errors relating to the WordPress login / captcha page?
-
Going through the crawl of my site, there were around 100 medium-priority issues, such as "title element too short" and "duplicate page title", and around 80 high-priority issues relating to duplicate page content. However, every page listed with these issues was the site's WordPress login / captcha page. Does anyone know how to resolve this?
-
Thank you all. I have disallowed it in robots.txt and will wait until the next crawl to see whether this has resolved the problem.
Cheers!
-
Hi there! Tawny from Moz's Help Team here.
It looks like you've already got some good answers here, but if you're still looking for clarification or a bit more help, feel free to write in to help@moz.com with the details of your campaign and we'll do what we can to sort things out!
-
Mike and Dmytro answered this really well. It should be blocked in robots.txt.
But also, you might be linking to your login page publicly. I often see "login" or "admin" links in WordPress themes in a sidebar, widget or footer. You should probably remove those as well (unless you allow public users to create their own accounts and log in).
-
I agree with Mike. You shouldn't really allow bots to try to access wp-admin / captcha pages.
I would suggest adding the following line to your robots.txt file: Disallow: /wp-admin/
You can do the same for the captcha page, or noindex it.
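For reference, a minimal robots.txt sketch along those lines might look like the following (the /captcha/ path is a placeholder; use whatever URL actually shows up in your crawl report):

    User-agent: *
    # Keep crawlers out of the WordPress admin and login pages
    Disallow: /wp-admin/
    Disallow: /wp-login.php
    # Placeholder - replace with the captcha URL from your crawl report
    Disallow: /captcha/
    # Some themes and plugins load admin-ajax.php on the front end, so leave it reachable
    Allow: /wp-admin/admin-ajax.php

Keep in mind that robots.txt blocks crawling, not indexing, so URLs that are already indexed may linger in the index for a while.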
-
Normally I would noindex a page like that and/or disallow it in robots.txt.
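If you go the noindex route, it's just a robots meta tag in the head of the login / captcha page (or the equivalent X-Robots-Tag HTTP header), for example:

    <meta name="robots" content="noindex, follow">

One caveat: if the page is also disallowed in robots.txt, crawlers can never fetch it to see the noindex, so pick one approach per page rather than stacking both.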
Related Questions
-
How bad is it to have duplicate content across http:// and https:// versions of the site?
A lot of pages on our website are currently indexed on both their http:// and https:// URLs. I realise that this is a duplicate content problem, but how major an issue is this in practice? Also, am I right in saying that the best solution would be to use rel canonical tags to highlight the https pages as the canonical versions?
Technical SEO | RG_SEO
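For reference, a canonical tag like the one described is a single line in the head of each affected page; a hedged sketch with an example URL:

    <link rel="canonical" href="https://www.example.com/page/" />

Placed on both the http:// and https:// copies, it tells search engines which version to consolidate on, although a site-wide 301 from http to https is usually the stronger fix when it's an option.
-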
Redirecting root domain to a page based on user login
We have our main URL redirecting non-logged-in users to a specific page, while logged-in users are directed to their dashboard when going to the main URL. We find this to be the most user-friendly approach; however, it is all being picked up as a 302 redirect. I am trying to advise on the ideal way to accomplish this, but I am not having much luck in my search for information. I believe we are going to put a true homepage at the root domain and simply redirect logged-in users as usual when they hit the URL, but I'm still concerned this will cause issues with Google and other search engines. Anyone have experience with domains that need to work in this manner? Thank you! Anna
Technical SEO | annalytical
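The usual pattern here is to serve a real, indexable homepage with a 200 status at the root and only 302 logged-in sessions to their dashboard. A rough PHP sketch, where the login check and dashboard URL are placeholders for whatever your platform actually provides:

    <?php
    // Hypothetical helper - substitute whatever auth check your platform provides
    if (is_user_logged_in()) {
        // 302 (temporary) is the right status here: the dashboard is session-specific
        // and should not replace the homepage in the index
        header('Location: https://www.example.com/dashboard/', true, 302);
        exit;
    }
    // Everyone else, including crawlers, receives the real homepage with a 200 status

Search engines then see a normal homepage, and the 302 only ever applies to authenticated visitors.
-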
How to create a sitemap for a large site (ecommerce type) that has thousands, if not hundreds of thousands, of pages
I know this is kind of a newbie question but I am having an amazing amount of trouble creating a sitemap for our site Bestride.com. We just did a complete redesign (look and feel, functionality, the works) and now I am trying to create a site map. Most of the generators I have used "break" after reaching some number of pages. I am at a loss as to how to create the sitemap. Any help would be greatly appreciated! Thanks
Technical SEO | BestRide
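For a site of that size the standard approach is to split the URLs across several sitemap files (up to 50,000 URLs each) and reference them from a sitemap index; a minimal sketch with placeholder file names:

    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <sitemap>
        <loc>http://www.bestride.com/sitemap-listings-1.xml</loc>
      </sitemap>
      <sitemap>
        <loc>http://www.bestride.com/sitemap-listings-2.xml</loc>
      </sitemap>
    </sitemapindex>

At that scale the child sitemaps are normally generated by the site itself (the CMS or a scheduled script) rather than a desktop generator, and only the index file is then submitted in Google Webmaster Tools.
-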
Would posting content into these sites be a good boost related to authority?
Hi, would posting content on these sites be a good boost for authority?
Press releases: PRWeb, PRLeap
Articles: thetechscoop.net, thecampussocialite.com, techi.com, business2community.com, mediaite.com, examiner.com, makezine.com, huffingtonpost.com
All of these sites charge to post. Is it worth it? Thanks
Technical SEO | mtthompsons
-
Crawl errors: which ones should I sort out?
Hi, I just had my website updated to Joomla 3.0 and I have around 4,000 URLs not found. I have been told I need to redirect these, but I would just like to check on here to make sure I am doing the right thing, in case the advice I have been given is not correct. I have been told these errors are the reason for the drop in rankings. I need to know if I should redirect all of these 4,000 URLs or only the ones that are being linked to from outside the site. I think about 3,000 of these have no links from outside the site, but if I do not redirect them all then I am going to keep getting the error messages. Around 2,000 of these not-found URLs are from the last time we updated the site, which was a couple of years ago, and I thought they would have died off by now. Any advice on what I should do would be great.
Technical SEO | ClaireH-184886
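If you do redirect them, it usually doesn't mean 4,000 individual rules; on Apache a handful of pattern-based 301s in .htaccess can cover whole sections whose URL structure changed. A hedged sketch with invented paths:

    # Placeholder paths - map these to your real old and new URLs
    # One pattern rule catches an entire old section
    RedirectMatch 301 ^/old-section/(.*)$ /new-section/$1
    # Individual retired pages can point to the closest relevant live page
    Redirect 301 /old-page.html /new-page/

Prioritise the URLs that have external links or still receive traffic; old URLs with neither can be left to return 404 and will drop out of the report over time.
-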
Too Many On-Page Links on a Blog
I have a question about the number of on-page links on a page and the implications on how we're viewed by search engines. After SEOmoz crawls our website, we consistently get notifications that some of our pages have "Too Many On-Page Links." These are always limited to pages on our blog, and largely a function of our tag cloud (~ 30 links) plus categories (10 links) plus popular posts (5 links). These all display on every blog post in the sidebar. How significant a problem is this? And, if you think it is a significant problem, what would you suggest to remedy the problem? Here's a link to our blog in case it helps: http://wiredimpact.com/blog/ The above page currently is listed as having 138 links. Any advice is much appreciated. Thanks so much. David
Technical SEO | WiredImpact
-
How does Google Crawl Multi-Regional Sites?
I've been reading up on this in Webmaster Tools but just wanted to see if anyone could explain it a bit better. I have a website going live soon which is set up to redirect to a localised URL based on the visitor's IP address, i.e. NZ IP ranges will go to .co.nz, Australian IP addresses will go to .com.au, and US or other non-specified IP addresses will go to the .com address. There is a single CMS installation for the website. Does this impact the way in which Google is able to search the site? Will all domains be crawled or just one? Any help would be great - thanks!
Technical SEO | lemonz
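Worth noting: with a pure IP-based redirect, Googlebot (which mostly crawls from US IP addresses) may only ever see the .com version. The documented approach is to let each regional URL be crawlable and link the versions together with hreflang annotations, roughly like this (domains are illustrative):

    <!-- On each regional version of the page -->
    <link rel="alternate" hreflang="en-nz" href="https://www.example.co.nz/" />
    <link rel="alternate" hreflang="en-au" href="https://www.example.com.au/" />
    <link rel="alternate" hreflang="en-us" href="https://www.example.com/" />
    <link rel="alternate" hreflang="x-default" href="https://www.example.com/" />

A banner suggesting the local version tends to be safer for crawling than a forced redirect.
-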
How to avoid 404 errors when taking a page off?
So... we are running a blog that was supposed to have great content. After working at SEO for a while, I discovered there is too much keyword stuffing and too many SEO tricks for WordPress that were supposed to make it rank better. In fact, that worked, but I'm not taking the risk of getting slapped by the Google Panda. So we decided to restart our blog from zero and make a better attempt. Every page was already ranking in Google. SEOmoz hasn't crawled it yet, but I'm pretty sure the crawlers will report a lot of 404 errors. My question is: can I avoid these errors with some tool in Google Webmaster Tools (sitemaps), or should I make some rel=canonicals or 301 redirects? Will Google penalise me for that? It seems obvious to me that the answer is yes. Please help 😉
Technical SEO | ivan.precisodisso
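On the actual question: Google doesn't penalise a site simply for returning 404s on removed pages. For the old posts, a 301 to the closest replacement (where one exists) preserves any rankings and links, and a 410 makes it explicit that the rest are gone for good. A hedged .htaccess sketch with invented paths:

    # Placeholder paths - substitute your real old and new post URLs
    # Old post that has a rewritten replacement
    Redirect 301 /old-keyword-stuffed-post/ /blog/rewritten-post/
    # Old post that is gone for good
    Redirect gone /retired-post/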