Crawl errors are still shown after being fixed
-
I fixed the "title too long" warnings and some 404 errors long ago, but they still keep showing up in the error statistics.
-
WMT is slow as heck about making those things go away; it may take 3 months for them to roll off. FYI, if there is a 404 error and it is supposed to be a 404, do not mark it as fixed. Otherwise Google will think you "fixed" the 404 back to a 200, recrawl it, and then put the 404 back in the GWT errors. Just let the 404s roll off over time.
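If you want to double-check which of the flagged URLs still genuinely return a 404 before marking anything as fixed, a quick status check helps. Below is a minimal sketch using Python's requests library; the URLs are placeholders, not real entries from a report.

```python
# Minimal sketch: confirm what the URLs flagged as 404 actually return today.
# The URLs below are placeholders -- paste in the ones from your crawl error report.
import requests

flagged_urls = [
    "https://www.example.com/old-page",
    "https://www.example.com/removed-product",
]

for url in flagged_urls:
    try:
        response = requests.get(url, allow_redirects=True, timeout=10)
        # A deliberate 404 should still return 404; if it now returns 200,
        # the error was genuinely fixed and marking it as fixed is safe.
        print(response.status_code, url)
    except requests.RequestException as exc:
        print("ERROR", url, exc)
```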
-
On Google Webmaster Tools or Moz?
On WMT, you can just mark things you fixed as fixed, and they will go away immediately and won't come back unless Google sees them again. For Moz, you should just have to wait till the next crawl.
Related Questions
-
Solved: Duplicate content error affecting 142 pages
Hello,
Recently I noticed a new duplicate error notification.
This page: https://www.earley.com/insights/internet-things-and-product-data
is flagged as 'duplicate content' with 142 affected pages.
Here's an example of one of the affected pages:
https://www.earley.com/insights/how-ontologies-drive-digital-transformation
This is not an ecommerce site. The affected pages are blog posts. We are pretty prolific writers and over the years we have produced nearly 300 articles. We are a consulting firm and the articles are about our area of expertise and cover a wide range of topics within that space.
I just don't understand why this would be flagged as duplicate or what I'm supposed to do with this information!
Help!
Thanks!
Sharon
Product Support | EISMarketing
-
Solved: Site Crawl Won't Complete
How can I start/restart a new site crawl? I requested one 2 days ago on one of my sites, and it won't complete. It's only 150 pages.
Product Support | PaulBarrs
-
Crawling issue
Hello,
I have added the campaign IJsfabriek Strombeek (ijsfabriekstrombeek.be) to my account. After the website had been crawled, it showed only 2 crawled pages, but this site has over 500 pages. It is divided into four versions: Dutch, French, English and German. I thought that could be the issue because I only filled in the root domain ijsfabriekstrombeek.be, so I created another campaign with the name ijsfabriekstrombeek and the URL ijsfabriekstrombeek.be/nl. When Moz crawled this one, I got the following remark:
"Moz was unable to crawl your site on Feb 21, 2018. Your page redirects or links to a page that is outside of the scope of your campaign settings. Your campaign is limited to pages with ijsfabriekstrombeek.be/nl in the URL path, which prevents us from crawling through the redirect or the links on your page. To enable a full crawl of your site, you may need to create a new campaign with a broader scope, adjust your redirects, or add links to other pages that include ijsfabriekstrombeek.be/nl. Typically errors like this should be investigated and fixed by the site webmaster."
I have checked the robots.txt and that is fine. There are also no robots meta tags in the code, so what can be the problem? I really need to see an overview of all the pages on the website so I can use Moz for what I described above: SEO improvement. Please get back to me soon. Is there any possibility that someone could sort out this issue with me through 'Join me'?
Thanks!
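Since the error points at a redirect or link leading outside the ijsfabriekstrombeek.be/nl scope, tracing where the campaign's start URL actually redirects can narrow the problem down. Below is a minimal sketch using Python's requests library; it assumes the /nl start URL from the question and that the site is served over HTTPS.

```python
# Minimal sketch: follow the redirect chain of the campaign start URL to see
# whether it ends up outside the .../nl path the campaign is limited to.
# Assumes HTTPS; adjust the start URL if the site is served over HTTP.
import requests

start_url = "https://ijsfabriekstrombeek.be/nl"
response = requests.get(start_url, allow_redirects=True, timeout=10)

for hop in response.history:
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
print("Final:", response.status_code, response.url)
print("Still inside the /nl scope:", "ijsfabriekstrombeek.be/nl" in response.url)
```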
Product Support | Benjamien
-
Rogerbot not crawling our site
Has anyone else had issues with Roger crawling your site in the last few weeks? It shows only 2 pages crawled. I was able to crawl the site using Screaming Frog with no problem and we are not specifically blocking Roger via robots.txt or any other method. Has anyone encountered this issue? Any suggestions?
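One thing worth ruling out is a robots.txt rule that blocks rogerbot specifically while letting other crawlers through. Below is a minimal sketch using Python's urllib.robotparser; the domain is a placeholder, since the affected site isn't named in the question.

```python
# Minimal sketch: check whether robots.txt allows Moz's crawlers.
# The domain is a placeholder -- substitute the affected site.
from urllib.robotparser import RobotFileParser

site = "https://www.example.com"
parser = RobotFileParser()
parser.set_url(site + "/robots.txt")
parser.read()

for agent in ("rogerbot", "dotbot", "*"):
    allowed = parser.can_fetch(agent, site + "/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```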
Product Support | cckapow
-
Why can I not crawl this site
I wanted to add this site as a new campaign: new.kbc.be. But it won't accept it. Why?
Product Support | KBC
-
Crawl Limit Question
I'm a little confused as to how the crawl limit works. Since there seems to be a 10K per week max, the crawl limit can't be per week, so what is the time period? Also, does that include crawling sites entered as competitors? Right now I'm at 14/25 sites and most of them are under 1,000 pages so I'm not sure how I hit that limit (other than a one-time spike of 28,000 in November).
Product Support | David_Moceri
-
Duplicate Content Report: Duplicate URLs being crawled with "++" at the end
Hi,
In our Moz report over the past few weeks I've noticed some duplicate URLs appearing like the following:
Original (valid) URL: http://www.paperstone.co.uk/cat_553-616_Office-Pins-Clips-and-Bands.aspx?filter_colour=Green
Duplicate URL: http://www.paperstone.co.uk/cat_553-616_Office-Pins-Clips-and-Bands.aspx?filter_colour=Green++
These aren't appearing in Webmaster Tools or in a Screaming Frog crawl of our site, so I'm wondering if this is a bug with the Moz crawler. I realise it could be resolved using a canonical reference, or by performing a 301 from the duplicate to the canonical URL, but I'd like to find out what's causing it and whether anyone else is experiencing the same problem.
Thanks, George
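One quick way to see whether the "++" duplicates are already handled would be to check what they return and whether they declare a canonical pointing back at the clean URL. Below is a minimal sketch using Python's requests library and a simple regex (the URLs are the ones from the question; a real check might use a proper HTML parser).

```python
# Minimal sketch: check the status code and rel=canonical of a suspected duplicate URL.
import re
import requests

valid_url = "http://www.paperstone.co.uk/cat_553-616_Office-Pins-Clips-and-Bands.aspx?filter_colour=Green"
duplicate_url = valid_url + "++"

response = requests.get(duplicate_url, allow_redirects=True, timeout=10)
print("Status:", response.status_code)

# Find a <link ... rel="canonical" ...> tag and pull out its href, if any.
canonical = None
for tag in re.findall(r"<link[^>]+>", response.text, flags=re.IGNORECASE):
    if re.search(r'rel=["\']canonical["\']', tag, flags=re.IGNORECASE):
        href = re.search(r'href=["\']([^"\']+)["\']', tag, flags=re.IGNORECASE)
        if href:
            canonical = href.group(1)
        break

print("Canonical:", canonical)
print("Canonical matches the valid URL:", canonical == valid_url)
```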
Product Support | webmethod