Huge increase in on-page errors
-
Hi guys, I've just checked my online campaign and I see errors in my crawl diagnostics have almost doubled from the 21st of October to the 25th of October, going from 6,708 errors to 11,599. Can anyone tell me what may have caused this?
Also, I notice we have a lot of issues with duplicate page titles, which seems strange as no new pages have been added. Can anyone explain why this might be?
I look forward to hearing from you.
-
Just noting that this discussion continues here:
-
Hi Matt,
I hope things are going well. I'm just following up on the duplicate page issue. I have spoken to our web company and they corrected the issue last week, and I noted the number of duplicates dropped significantly. I've just checked today and I see it's back up (I'm following this up with our web company). Are you able to offer any insights as to why this problem seems to recur?
I was of the understanding this was a permanent fix, so once the change had been made I can't understand why it would recur. Any insights would be much appreciated.
Regards
Pete
-
Well, it stands to reason that something must've changed in order to cause such a huge increase. Looking through the list of duplicate URLs, I'm seeing a lot that could be fixed by rel="canonical". There are enough of them that adding a canonical link to each would be a huge undertaking or require some careful coding. I'm wondering if this increase could've been partially caused by someone removing rel="canonical" from a lot of pages.
For example, I'm seeing a lot of this:
http://www.health2000.co.nz/shop/aromatherapy/lemongrass-essential-oil/P4494/C56
vs.
http://www.health2000.co.nz/shop/aromatherapy/lemongrass-essential-oil/p4494/c56
The only difference between those URLs is capitalization. The first, capitalized version is the one that appears in your XML sitemap. I'm not 100% sure why both versions would be appearing to Roger (it may be an issue with the CMS), but a rel="canonical" pointing both versions at a single one would solve that problem.
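For reference, here's a minimal sketch of what that canonical tag looks like, using your lemongrass URL as the example. It goes in the `<head>` of every variant of the page (including the canonical version itself), pointing at whichever one URL you pick as authoritative; I've used the capitalized sitemap version here, but the lowercase one works just as well as long as you're consistent:

```html
<!-- In the <head> of both the /P4494/C56 and /p4494/c56 versions of the page -->
<link rel="canonical" href="http://www.health2000.co.nz/shop/aromatherapy/lemongrass-essential-oil/P4494/C56" />
```

With that in place, crawlers fold the case variants into the one chosen URL instead of reporting them as duplicates.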
Now, that doesn't look to be the only issue, but it _is_ a large one.
Let me know what you find out!
-
That's ok, Matt. I've put those two questions to our web company, although I don't think any changes happened then. I do know that they did work on the 27th of September, and I'm fairly sure it was rel="canonical" in nature. I have asked them to confirm and will let you know in due course. As an aside, why do you think the changes you mentioned would have an effect on our web site?
-
Hi Pete,
Sure thing. Sorry it's taken so long!
May I ask what, if any, changes were made on the site between the 21st and 25th of October? In particular, were there any changes made involving rel="canonical"?
-
Hi Matt,
I've just had a look and I can now see the number of pages crawled on our site is comparable to our competitors', which is great!
However, the number of on-page errors is significantly higher. In particular, the number of duplicate errors is about 10,000, which is the same amount we had before our web company fixed this issue. Are you able to give me any feedback as to what's happened there? Thanks again for your help with this!
Pete
-
Hi Matt,
No probs, I look forward to hearing from you
Regards
Pete
-
That's awesome, Keri, thanks for following this up : )
-
Hi Pete,
Sorry for the delay! I just wanted to let you know I'm looking into it, and should get back to you shortly.
Matt
-
Hi Pete,
This is going to need a bit more digging than I can do from where I sit. I'm going to ask a colleague of mine to come in and lend you a hand. Thanks for your patience!
Keri
-
Thanks Keri, here is our site: http://www.health2000.co.nz. We have recently asked our IT company to make amendments to offset the duplicate pages issue.
The attached graphic shows the problem was in decline but now it seems to have come back. Any idea why that might be? I would have thought 301 redirecting would be an all-or-nothing solution. Also, I've asked our IT company and they have said it may take a while for Google to re-index our pages. If that is correct, how long do you think it will take?
I've set up campaigns for our organisation and four of our competitors and note that on average we have had 6,500 pages crawled whereas our competitors have over 11,000 pages crawled. Is there any reason why that might be? Thanks again for your help!
Pete
-
If you give him a 301 redirect, it should help him and the search engines, which is the most important part.
If you can touch base with your IT team and see if they changed something and ask them to change it back, that'd be a good place to start. If you can share your URL here, we can look at it and help direct you to the easiest way to fix things (if it is the www and non-www problem), or help identify the source of the problem.
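If it does turn out to be the www/non-www split, the usual fix is a site-wide 301 redirect. Here's a minimal sketch for Apache, assuming the server runs Apache with mod_rewrite enabled and that www is the version you want to keep (this would go in the site's .htaccess or virtual-host config):

```apache
RewriteEngine On
# Permanently (301) redirect any request on the bare hostname to the www version
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
```

That collapses both hostnames into one, so crawlers and search engines only ever see a single copy of each page.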
-
Hi Keri,
Thanks for dropping me a line... How do we make that cheeky little robot unfind one of them? : )
Cheers
Pete
-
Hmmm... my first thought, given the sudden duplicate content and doubling of errors, is that perhaps Roger found both the www and non-www versions of the site?