Crawl Errors from URL Parameter
-
Hello,
I am having an issue with SEOmoz's Crawl Diagnostics report. There are a lot of crawl errors on pages associated with /login.
I see site.com/login?r=http://.... and several duplicate content issues associated with those URLs.
Seeing this, I checked WMT to see if the Google crawler was showing this error as well. It wasn't.
So what I ended up doing was going to the robots.txt and disallowing rogerbot.
It looks like this:
User-agent: rogerbot
Disallow: /login
However, SEOmoz has crawled again and it's still picking up those URLs. Any ideas on how to fix this?
Thanks!
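As an aside for anyone testing a rule like this: a prefix rule such as Disallow: /login can be sanity-checked offline with Python's standard urllib.robotparser. Note that it implements plain prefix matching only and does not understand * wildcards; the example.com URL below is purely illustrative.

```python
from urllib import robotparser

# Hypothetical robots.txt content mirroring the rule in the question.
rules = """User-agent: rogerbot
Disallow: /login
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Prefix matching: /login and anything under it is blocked for rogerbot,
# including the parameterized /login?r=... URLs from the crawl report.
print(rp.can_fetch("rogerbot", "/login?r=http://example.com"))  # False
print(rp.can_fetch("rogerbot", "/"))                            # True
```

If this reports the rule as working but the crawler still hits the URLs, the usual culprits are a typo in the live robots.txt or the crawler caching an older copy.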
-
Hi Tony,
I need more information from you in order to check this out. I'm going to send you a support ticket to reply to.
Thanks,
Joel.
Related Questions
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate page content issues. Most of them come from dynamically generated URLs that carry specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages, and to stop it I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters, because among these 380 pages there are other pages with no parameters (or different parameters) that I do need to take care of. Basically, I need to clean up this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few related topics, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:
User-agent: dotbot
Disallow: /*numberOfStars=0
User-agent: rogerbot
Disallow: /*numberOfStars=0
My questions:
1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact?
2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter?
I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
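A note on the wildcard question above: the * wildcard is not part of the original robots.txt standard, so support depends on the crawler, but crawlers that implement Googlebot-style matching treat * as "any run of characters" and a trailing $ as an end anchor. Here is a minimal Python sketch of that matching logic, with hypothetical example paths, to show which URLs a pattern like /*numberOfStars=0 would catch:

```python
import re

def robots_pattern_to_regex(pattern: str):
    """Translate a Googlebot-style robots.txt path pattern to a regex:
    '*' matches any run of characters; a trailing '$' anchors the end.
    This is an illustrative sketch, not any crawler's actual parser."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    body = ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.compile("^" + body + ("$" if anchored else ""))

rule = robots_pattern_to_regex("/*numberOfStars=0")

# Only URLs containing the numberOfStars=0 parameter match; other
# pages (no parameter, or a different value) are left alone.
print(bool(rule.match("/hotels?numberOfStars=0")))  # True
print(bool(rule.match("/hotels?numberOfStars=4")))  # False
print(bool(rule.match("/hotels")))                  # False
```

Under this matching model the two Disallow lines in the question would indeed target only the numberOfStars=0 URLs; whether a given crawler honors the wildcard at all is a separate question for that crawler's documentation.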
If links have been disavowed, do they still show in crawl reports?
I have a new client who says they have disavowed all their bad links, but I still see a bunch of spammy backlinks in my external links report. I understand that a disavow does not mean the links are actually removed, so will they continue to show in Google Webmaster Tools and in my Moz reports? If so, how do I know which ones have been disavowed and which have not? Regards, Dino
-
What is the error message social_account.no_method?
When trying to add a Twitter account to track, I received the following error message: social_account.no_method. Has anyone else received this message?
-
404: Error - MBP Ninja Affiliate
Hello, I use the MBP Ninja Affiliate plugin to redirect links. I ran Crawl Diagnostics and it reports a 404 error, but the link is working; it exists. Why does Crawl Diagnostics report a 404 error?
-
Does Open Site Explorer purposefully not crawl some sites?
I use both SEOmoz's Open Site Explorer and Webmaster Tools to find backlinks when conducting link audits. WMT always finds more links than OSE; I understand Google's database is bigger. But what is interesting to me is that a large percentage of the links WMT finds that OSE does not are really crappy links that I don't want. That makes me wonder whether SEOmoz decides not to crawl certain low-quality sites. Just curious.
-
Duplicate content error?
I am seeing a duplicate content error for the following pages: http://www.bluelinkerp.com/contact/ and http://www.bluelinkerp.com/contact/index.asp. Doesn't the first URL just automatically redirect to the default page in that directory (index.asp)? Why are they showing up as separate duplicate pages?
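On why these count as duplicates: a crawler identifies pages by their exact URL, so even when the server serves index.asp as the directory's default document, the two addresses are distinct pages until one 301-redirects (or points rel=canonical) to the other. A trivial sketch, using the URLs from the question:

```python
from urllib.parse import urlsplit

a = urlsplit("http://www.bluelinkerp.com/contact/")
b = urlsplit("http://www.bluelinkerp.com/contact/index.asp")

# Same host, different paths: to a crawler these are two distinct pages,
# even if the server returns identical content for both.
print(a.netloc == b.netloc)  # True
print(a.path == b.path)      # False
```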
-
How long does it take for a campaign website crawl to be completed?
Our campaign website crawl has been 'crawling' for 5 days now. Is this normal, or is something hanging up?
-
Site Explorer reporting an error for over a week
Unable to display anchor text. The error reads: "Doh! Roger is still working out the kinks with the new index and is having issues untangling anchor text data. We're currently showing anchor text data from the previous index, but we will update as soon as we can."