URL Errors Help - 350K Page Not Founds in 22 days
-
Got a good one for you all this time...
For our site, Google Search Console is reporting 436,758 "Page Not Found" errors within the Crawl Error report.
This is an increase of 350,000 errors in just 22 days (on Aug 21 we had 87,000 errors, a level that had held essentially steady for the previous four months or more). Then on August 22nd the errors jumped to 140,000, climbed steadily from the 26th to the 31st to reach 326,000, and then climbed again slowly from Sept 2nd until today's 436K.
Unfortunately I can only see the top 1,000 erroneous URLs in the console, and they appear to be custom Google tracking URLs my team uses to track our pages.
A few questions:
1. Is there any way to see the full list of 400K URLs Google reports it cannot find?
2. Should we be concerned at all about these?
3. Any other advice? Thanks in advance!
C
-
No problem! Please let us know if you need any help once you have your results.
-
Thank you all for the feedback. A comprehensive deep crawl of the site is underway to help find out more. I truly appreciate all your guidance.
Best
CC
-
I'm guessing this is for a news or ecommerce site? That is a lot of URLs.
Screaming Frog is a good resource, but I would look at the format of the URLs and at how your platform creates them. I remember years ago many people had issues with WordPress, Joomla and other CMSes creating alternate URLs without the publisher knowing about them. Most likely it's a setting in your system. Take a look at the URL settings, and also at the URLs the tracking software says it cannot find. Look for patterns across URLs and categories. You may find what you are looking for.
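To make that pattern-hunt concrete, here is a rough sketch (the sample URLs and the use of Python are my own assumptions; the GSC error export would be the real input). It tallies 404 URLs by first path segment and by query-parameter name, which tends to expose a misbehaving CMS setting or a tracking-parameter explosion quickly:

```python
from collections import Counter
from urllib.parse import urlsplit, parse_qsl

def summarize_404s(urls):
    """Tally 404 URLs by first path segment and by query-parameter name."""
    sections, params = Counter(), Counter()
    for url in urls:
        parts = urlsplit(url)
        segments = [s for s in parts.path.split("/") if s]
        sections[segments[0] if segments else "/"] += 1
        for key, _ in parse_qsl(parts.query):
            params[key] += 1
    return sections, params

# Made-up URLs resembling tracking links, for illustration only:
urls = [
    "http://example.com/products/widget?utm_source=partner1",
    "http://example.com/products/gadget?utm_source=partner2",
    "http://example.com/blog/post-1",
]
sections, params = summarize_404s(urls)
print(sections.most_common(3))  # most error-prone site sections
print(params.most_common(3))    # most common query parameters
```

If one path segment or one parameter name dominates the tallies, that is usually the setting to chase down.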
-
Not sure if this is related but myself and someone else have seen something similar around the same time happen, see here: https://moz.com/community/q/strange-increase-of-pages-not-found-gwt
-
Hi Usnseomoz,
1. Maybe perform a deeper crawl with a tool like Screaming Frog. This should help to further highlight any missing pages, errors, etc.
2. I would always be concerned about any problem until you have either resolved it or discounted it. It sounds like the URL tracking parameters could be causing your issues, especially if you are tracking users from multiple sources / sales affiliates. If they are used solely for tracking and no other purpose, I would consider adding these parameter variables to the crawl filters:
Side menu >> Crawl >> URL Parameters
https://support.google.com/webmasters/answer/6080548?rd=1
Hope this is of some use.
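Before filtering, it can help to confirm that stripping the parameters leaves a URL that actually resolves. A minimal sketch, assuming a hypothetical list of tracking parameter names (swap in whatever your tracking software actually appends):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical tracking parameters; replace with the ones your team uses.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "ref"}

def strip_tracking(url):
    """Return the URL with known tracking parameters removed."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

print(strip_tracking("http://example.com/page?utm_source=mail&id=7"))
# -> http://example.com/page?id=7
```

If the cleaned URLs return 200, the parameters are purely decorative and safe to filter; if they 404 too, the problem is the underlying pages, not the tracking.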
Cheers
Tim
Related Questions
-
Migrating from Parameter-Driven URLs to SEO-Friendly URLs (Slugs)
Hi all, hope you're all good and having a wonderful Friday morning. At the moment we have over 20,000 live products on our ecommerce site; however, all of the products use non-SEO-friendly URLs (/product?p=1738 etc.) and we're looking at deploying SEO-friendly URLs such as /product/this-is-product-one. As you can imagine, making such a change on a big ecommerce site will be a difficult task, and we will have to take on a lot of content changes, hreflang changes, affiliate link tests and a big 301 task. I'm trying to put some analysis together to pitch to the tech guys, but it's difficult. I do understand that this change has its benefits for SEO, usability and CTR, but I need some more info. Keywords in the slugs: what is their actual SEO weight? Has anyone here recently converted from parameter-based URLs to keyword-based slugs and seen results? Also, what are the best ways of deploying this? Add a canonical and 301? All comments greatly appreciated! Brett
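For what it's worth, the mechanical part of such a migration is small. A hedged sketch (the product names and IDs here are made up) that builds slugs and a 301 map from old parameter URLs:

```python
import re

def slugify(name):
    """Lowercase, replace runs of non-alphanumerics with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

# Hypothetical product catalogue: ID -> display name.
products = {1738: "This Is Product One", 1739: "Another Product"}

# 301 map from old parameter URLs to new slug URLs.
redirects = {
    f"/product?p={pid}": f"/product/{slugify(name)}"
    for pid, name in products.items()
}
print(redirects["/product?p=1738"])  # -> /product/this-is-product-one
```

On deployment: each old URL should 301 to its slug URL, and the slug URL should carry a self-referencing canonical. A canonical alone, without the 301, leaves both URLs live and splits link equity.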
Intermediate & Advanced SEO | Brett-S
-
Duplicate page content on numerical blog pages?
Hello everyone, I'm still relatively new to SEO and am trying my best to learn. However, I have this persistent issue. My site is on WordPress and all of my paginated blog pages, e.g. page 2, page 3, etc., are coming up as duplicate content. Here are some example URLs of what I mean: http://3mil.co.uk/insights-web-design-blog/page/3/ http://3mil.co.uk/insights-web-design-blog/page/4/ Does anyone have any ideas? I have already noindexed categories and tags, so it is not them. Any help would be appreciated. Thanks.
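A common fix at the time of this thread was rel="prev"/"next" link tags on paginated archives rather than noindexing them (Google has since dropped support for these tags, so treat this as period advice). A sketch assuming the /page/N/ URL pattern from the question:

```python
def pagination_links(base, page, last_page):
    """Build rel=prev/next link tags for one page of a paginated archive.
    Page 1 lives at `base` itself; later pages at `base`page/N/."""
    links = []
    if page > 1:
        prev_url = base if page == 2 else f"{base}page/{page - 1}/"
        links.append(f'<link rel="prev" href="{prev_url}">')
    if page < last_page:
        links.append(f'<link rel="next" href="{base}page/{page + 1}/">')
    return links

for tag in pagination_links("http://3mil.co.uk/insights-web-design-blog/", 3, 4):
    print(tag)
```

The tags go in the head of each paginated page; each page also keeps a self-referencing canonical rather than canonicalizing to page 1.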
Intermediate & Advanced SEO | 3mil
-
Why are "noindex" pages access denied errors in GWT and should I worry about it?
GWT calls pages that have "noindex, follow" tags "access denied errors." How is it an "error" to say, "hey, don't include these in your index, but go ahead and crawl them." These pages are thin content/duplicate content/overly templated pages I inherited and the noindex, follow tags are an effort to not crap up Google's view of this site. The reason I ask is that GWT's detection of a rash of these access restricted errors coincides with a drop in organic traffic. Of course, coincidence is not necessarily cause. Should I worry about it and do something or not? Thanks... Darcy
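It is worth double-checking what the server actually returns for those URLs: "access denied" in GWT normally means an HTTP-level block (401/403), while a noindex meta tag on a 200 page is not an error at all. A rough triage helper (hypothetical, for classifying a fetched sample of the flagged pages):

```python
def classify_fetch(status_code, html):
    """Rough triage: 401/403 is a genuine access-denied; a noindex
    meta tag on a 200 response is working as intended."""
    if status_code in (401, 403):
        return "access denied"
    if status_code == 200 and 'name="robots"' in html and "noindex" in html:
        return "noindex (fine)"
    return "other"

print(classify_fetch(200, '<meta name="robots" content="noindex, follow">'))
print(classify_fetch(403, ""))
```

If the flagged URLs really come back 200 with a noindex tag, the GWT report and the traffic drop are likely coincidental; if they come back 403, something (a firewall rule, a bot blocker) is refusing Googlebot.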
Intermediate & Advanced SEO | 94501
-
Does Google Read URLs If They Include a # Tag? Re: SEO Value of Clean URLs
An ECWID rep stated, in regards to an inquiry about how ECWID URLs are not customizable, that "an important thing is that it doesn't matter what these URLs look like, because search engines don't read anything after the # in URLs." Example: http://www.runningboards4less.com/general-motors#!/Classic-Pro-Series-Extruded-2/p/28043025/category=6593891 Basically all of this: #!/Classic-Pro-Series-Extruded-2/p/28043025/category=6593891 That is a snippet from a conversation where ECWID said that dirty URLs don't matter beyond a hash. Is that true? I haven't found any rule that Google or other search engines (Google is really the most important) don't index, read, or place value on the part of the URL after a #.
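The fragment genuinely never reaches the server, which a quick standard-library demonstration makes visible. The caveat is the #! (hashbang) form in the example: Google's old AJAX-crawling scheme translated #! URLs into _escaped_fragment_ requests, so content behind a hashbang could historically be crawled, meaning "nothing after # is read" was not strictly true for this URL shape:

```python
from urllib.parse import urldefrag

url = ("http://www.runningboards4less.com/general-motors"
       "#!/Classic-Pro-Series-Extruded-2/p/28043025/category=6593891")
base, fragment = urldefrag(url)
print(base)      # what the server (and a plain fetch) receives
print(fragment)  # client-side only; never sent in the HTTP request
```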
Intermediate & Advanced SEO | Atlanta-SMO
-
What is happening with this page's rankings? (G Analytics screenprint attached) help me.
Hi, at the moment I'm confused. I have a page which has shown up for the query 'bank holidays' solidly on the first page for two years; this also applies to the terms 'mothers day', 'pancake day' and a few others (UK Google). And they're still ranking. Here is the problem: usually I would rank for 'bank holidays 2014' (the terms with the year in them are the real traffic drivers) at position 3–5. Over the last three months this has decayed, dropping to position 30+. From the screenprint you can see the term 'bank holidays' is holding on but the term 'bank holidays 2014' is slowly decaying. If you query 'bank holidays 2015' we don't appear in the rankings at all. What is causing this? The content is OK, social sharing happens and the odd link is picked up here and there. I need help: how do I start pushing this back in the other direction? It's like the site is slowly dying. And what really kills me is that two pages ranking on page 1 are off link farms. URL: followuk.co.uk/bank-holidays serp-decay.jpg
Intermediate & Advanced SEO | followuk
-
HELP! How does one prevent regional pages as being counted as "duplicate content," "duplicate meta descriptions," et cetera...?
The organization I am working with has multiple versions of its website geared towards different regions. US - http://www.orionhealth.com/ CA - http://www.orionhealth.com/ca/ DE - http://www.orionhealth.com/de/ UK - http://www.orionhealth.com/uk/ AU - http://www.orionhealth.com/au/ NZ - http://www.orionhealth.com/nz/ Some of these sites have very similar pages which are registering as duplicate content, meta descriptions and titles. Two examples are: http://www.orionhealth.com/terms-and-conditions http://www.orionhealth.com/uk/terms-and-conditions Now even though the content is the same, the navigation is different, since each region has different product options / services, so a redirect won't work: the navigation on the main US site is different from the navigation for the UK site. A rel=canonical seems like a viable option, but (correct me if I'm wrong) it tells search engines to only index the main page, in this case the US version, and I still want the UK site to appear to search engines. So what is the proper way of treating similar pages across different regional directories? Any insight would be GREATLY appreciated! Thank you!
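The usual answer here is hreflang alternates rather than a cross-region canonical, since a canonical pointing at the US page would indeed drop the UK page from the index. A hedged sketch (region codes and URL prefixes assumed from the question) that emits the tag set for one page; the same set goes on every regional variant of that page:

```python
def hreflang_tags(path, regions):
    """Emit hreflang alternate tags for one page across regional site versions.
    `regions` maps hreflang codes to URL prefixes (assumptions, not confirmed)."""
    return [
        f'<link rel="alternate" hreflang="{code}" href="{prefix}{path}">'
        for code, prefix in regions.items()
    ]

regions = {
    "en-us": "http://www.orionhealth.com",
    "en-gb": "http://www.orionhealth.com/uk",
    "en-au": "http://www.orionhealth.com/au",
}
for tag in hreflang_tags("/terms-and-conditions", regions):
    print(tag)
```

With hreflang in place, the regional pages are treated as localized alternates rather than competing duplicates, and each region's version stays indexable.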
Intermediate & Advanced SEO | Scratch_MM
-
How to associate content on one page to another page
Hi all, I would like associate content on "Page A" with "Page B". The content is not the same, but we want to tell Google it should be associated. Is there an easy way to do this?
Intermediate & Advanced SEO | Viewpoints
-
Should I build links to the home page or a URL containing the keyword?
I run an IT company and the company name does not contain the keyword I am trying to rank on. I also have a bunch of pages with PageRank that contain the actual keywords, for example: http://www.mycompanyname.com/tech-support/locations/brighton My target keyword is "Tech Support Brighton". My home page is PR4 and my location-based pages are PR3. My plan was to build three or four location pages for the areas we provide tech support in, target location-based keyword anchor text at these URLs, e.g. "Tech Support Brighton", and then for the home page build links that have the anchor text "Tech Support". Does this sound sane? Many thanks, K
Intermediate & Advanced SEO | SEOKeith