Why does my site have so many crawl errors relating to the WordPress login / captcha page?
-
Going through the crawl of my site, there were around 100 medium-priority issues, such as "title element too short" and "duplicate page title," and around 80 high-priority issues relating to duplicate page content. However, every page listed with these issues was the site's WordPress login / captcha page. Does anyone know how to resolve this?
-
Thank you all. I've disallowed it in robots.txt and will wait until the next crawl to see whether this resolves the problem.
Cheers!
-
Hi there! Tawny from Moz's Help Team here.
It looks like you've already got some good answers here, but if you're still looking for clarification or a bit more help, feel free to write in to help@moz.com with the details of your campaign and we'll do what we can to sort things out!
-
Mike and Dmytro answered this really well. It should be blocked in robots.txt.
But also, you might be linking to your login page publicly. I often see links to "login" or "admin" in WordPress themes, placed in a sidebar, widget, or footer. You should probably remove those as well (unless you allow public users to create their own accounts and log in).
-
I agree with Mike. You shouldn't really allow bots to try to access wp-admin / captcha pages.
I would suggest adding the following line to your robots.txt file: Disallow: /wp-admin/
You can do the same for the captcha page, or add a noindex tag to it.
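For reference, a minimal robots.txt along these lines might look like the sketch below. The /captcha/ path is purely an illustration, since the actual URL of the captcha page isn't given in this thread; substitute whatever path your site really uses. Note that Disallow only stops compliant crawlers from requesting the pages; it doesn't by itself remove URLs that are already indexed.

```text
# Applies to all crawlers
User-agent: *

# Block the WordPress admin area and login page
Disallow: /wp-admin/
Disallow: /wp-login.php

# Hypothetical path -- replace with your site's actual captcha page URL
Disallow: /captcha/
```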
-
Normally I would noindex a page like that and/or disallow it in robots.txt.
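If you go the noindex route, one common approach is a robots meta tag in the page's head. A caveat worth knowing: for Google to see and honor the tag, the page must remain crawlable, so don't combine noindex with a robots.txt Disallow on the same URL, or the crawler will never fetch the page and read the tag.

```html
<!-- Placed in the <head> of the login / captcha page -->
<meta name="robots" content="noindex, nofollow">
```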