404s in GWT - Not sure how they are being found
-
We have been getting multiple 404 errors in GWT that look like this: http://www.example.com/UpdateCart.
The problem is that this URL is not part of our site structure; it is only a fragment. The actual URL has a query string on the end, and if you take the query string off, the page does not work.
I can't figure out how Google is finding these pages. Could it be removing the query string?
Thanks.
-
Kelli - the first thing I thought of was what garfield_disliker asks: have you set up Google Webmaster Tools to ignore these parameters, which are required for the cart page to load?
That said, Google Webmaster Tools is run by a team that's separate from the primary search team, so it's possible that GWT is flagging an issue that isn't an actual issue for Google. Run a search in Google for "site:yourdomain.com/UpdateCart" and see what URLs Google has indexed. If they have that 404ing URL, that's not good. If they have correct URLs, it's possible that this is a Google Webmaster Tools thing.
-
Hi,
Are you using the /updateCart URL in goal tracking, or pushing events to analytics using this URL? I have seen GWT pick up 404s from us pushing virtual (non-existent) page views to analytics for goal tracking. Just a thought.
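To make the idea concrete, here's a minimal sketch of how a "virtual" pageview might be pushed via the Universal Analytics Measurement Protocol. The tracking ID and client ID values are placeholders, and whether this matches Kelli's setup is an assumption — the point is that a path like `/UpdateCart`, stripped of its query string, can end up reported to Google even though no such page exists:

```python
from urllib.parse import urlencode

def virtual_pageview_payload(tracking_id, client_id, page_path):
    """Build a Universal Analytics Measurement Protocol payload for a
    virtual pageview. Sending it would be an HTTP POST to
    https://www.google-analytics.com/collect (not done here)."""
    return urlencode({
        "v": "1",            # protocol version
        "tid": tracking_id,  # property ID, e.g. "UA-XXXXX-Y" (placeholder)
        "cid": client_id,    # anonymous client ID (placeholder)
        "t": "pageview",     # hit type
        "dp": page_path,     # document path -- the virtual URL being reported
    })

payload = virtual_pageview_payload("UA-XXXXX-Y", "555", "/UpdateCart")
print(payload)
```

If your goal tracking fires hits like this, the bare `/UpdateCart` path lives in your analytics data without ever being a real page.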
-
First, you can never be sure there are no external links. Open Site Explorer (like any other link analysis tool) does not have a full picture of the link graph, and Google doesn't always report every inbound link to your site. The junkier the scraper, the less likely you are to see the link.
Secondly, could you provide a concrete example of this?
Where is the page (with parameters) linked from on your site? How does your site append those parameters to the URL? Does it send users through a redirect to get to that URL? It might be useful to run your own crawl (with Screaming Frog or any other crawling software) of the site and look at all the internal links and their response codes.
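The core of what a crawler like Screaming Frog does for this check can be sketched in a few lines: parse each page's anchors, resolve them against the page URL, and keep only same-host links (each of which you would then fetch and record the status code for). This is only an illustration with made-up example markup, not any tool's actual implementation:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlsplit

class LinkCollector(HTMLParser):
    """Collect href targets from <a> tags, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def internal_links(html, base_url):
    """Return only the links that stay on the same host as base_url."""
    collector = LinkCollector(base_url)
    collector.feed(html)
    host = urlsplit(base_url).netloc
    return [u for u in collector.links if urlsplit(u).netloc == host]

page = '<a href="/UpdateCart?id=7">Cart</a> <a href="http://other.com/">x</a>'
links = internal_links(page, "http://www.example.com/")
print(links)
```

Fetching each collected URL and logging its response code would then show you exactly which internal links, with or without parameters, lead to 404s.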
Also have you set up Google WMT to ignore any parameters?
It's certainly possible that Google's crawlers are stripping parameters on their own.
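For reference, "stripping parameters" would mean exactly this transformation — dropping everything after the `?` — which turns the working cart URL into the 404ing fragment Kelli is seeing (the product-ID query string below is an invented example):

```python
from urllib.parse import urlsplit, urlunsplit

def strip_query(url):
    """Drop the query string (and fragment) from a URL, keeping the path."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

full = "http://www.example.com/UpdateCart?productId=123&qty=2"
print(strip_query(full))  # http://www.example.com/UpdateCart
```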
-
We do not dynamically inject canonicals into the page. They are also not old URLs because they have never been valid URLs.
They are all linked from internal pages, but when I look at those pages, the URL with the query string is the only URL that is being pointed to, not the partial URL. There are no external links.
Thanks,
Kelli -
In WMT click on the URL that is 404'd and then select "linked to from". It will show you where Google is picking up the 404 error.
Are these 404 pages being linked to from an external site? Sometimes the 404s that appear in WMT are from links pointing to your domain from an external site, often one that has scraped your site.
-
Does your website dynamically inject canonical links into the page? Some content management systems will automatically generate canonicals that strip parameters from the URL. If that's the case then that might be why you wouldn't see it in your ordinary site structure.
It's also possible that it's an old URL that Google indexed which is no longer on your site or something that is linked externally somewhere, so the crawlers are finding it somewhere off site.
Related Questions
-
Help: Blog post translations resulting in 404 Not Found?
A client set up a website that has multilingual functionality (WPML) and the back end is a bit of a mess. The site has around 6 translated versions of the 30 or so existing English blog posts in French, Italian and Spanish - all with their own URLs. The problem is that on the remaining 24 English blog posts, the language changer in the header is still there - even though the majority of posts have not been translated - so when you go to change the language to French, it adds **?lang=fr** onto the existing English URL, and is a page not found (4xx client error). I can't redirect anything because the page does not exist. Is there a way to stop this from happening? I have noticed it's also creating Italian/French/Spanish translations of the English categories too. Thanks in advance.
Technical SEO | skehoe0 -
On our site by mistake some wrong links were entered and google crawled them. We have fixed those links. But they still show up in Not Found Errors. Should we just mark them as fixed? Or what is the best way to deal with them?
Some parameter was not sent, so the link was read as null/city or null/country instead of cityname/city.
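That "null/city" pattern suggests a template interpolating a missing value straight into the URL. A hypothetical guard (the function and URL pattern here are illustrative, not the poster's actual code) is to refuse to build the link at all when the parameter is absent:

```python
def city_link(city_name):
    """Build a /city link, or return None when the parameter is missing,
    instead of emitting a literal 'null' path segment."""
    if not city_name:
        return None  # caller should skip rendering this link entirely
    return "/%s/city" % city_name

# Missing values produce no link rather than a crawlable /null/city URL.
links = [city_link(c) for c in ["paris", None, "rome"] if city_link(c)]
print(links)
```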
Technical SEO | Lybrate06060 -
404s affecting crawl rate?
We made a change to our site and all of a sudden we are creating a large number of 404 pages. Is this affecting the crawl/indexing rate? Currently we've submitted 3.4 million pages, have over 834K indexed, and have over 330K pages not found. Since the large increase in 404s we've noticed a decrease in pages crawled per day. I found this Q&A from Google Webmaster Central (http://googlewebmastercentral.blogspot.com/2011/05/do-404s-hurt-my-site.html) but it seems like the 404s should not have an effect. Is this article out of date? What do you think, fellow Moz-ers? Is this a problem?
Technical SEO | JoshKimber0 -
GWT Images Indexing
Hi guys! How long does it normally take for Google to index the images within a sitemap? I recently submitted a new, up-to-date sitemap and most of the pages have been indexed already, but no images have. Any reason for that? Cheers
Technical SEO | PremioOscar0 -
Google rankings strange behaviour - our site can only be found when searching repeatedly
Hello, We are experiencing something very odd at the moment and I hope somebody could shed some light on this. The rankings of our site dropped from page 2 to page 15 approx. 9 months ago. At first we thought we had been penalised and filed a reconsideration request. Google got back to us saying that there were no manual actions applied to our site. We have been working very hard to try to get the ranking up again and it seems to be improving. Now, according to several SERP monitoring services, we are on page 2/3 again for the term "holiday lettings". However, the really strange thing is that when we search for this term on Google UK, our site is nowhere to be found. If you then right away hit the search button again, searching for the same term, then voila! our website www.alphaholidaylettings.com is on page 2/3! We tried this on many different computers at different locations (private and public computers), making sure we had logged out of Google Accounts (so that customised search results are not returned). We even tried the computers at various retail outlets, including different Apple stores. The results are the same. Essentially, we are never found when someone searches for us for the first time; our site only shows up if you search for the same term a second or third time. We just cannot understand why this is happening. Somebody told me it could be due to a "Google dance" when indices on different servers are being updated, but this has now been going on for nearly 3 months. Has anyone experienced similar situations or have any advice? Many thanks!
Technical SEO | forgottenlife0 -
Google (GWT) says my homepage and posts are blocked by Robots.txt
Hi guys, I have a very annoying issue. My WordPress blog over at www.Trovatten.com has some indexation problems. Google Webmaster Tools says the following: "Sitemap contains urls which are blocked by robots.txt." and shows me my homepage and my blog posts. This is my robots.txt: http://www.trovatten.com/robots.txt
"User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/"
Do you have any idea why it says that the URLs are being blocked by robots.txt when that looks how it should? I've read in a couple of places that it can be caused by a WordPress plugin creating a virtual robots.txt, but I can't validate it.
1. I have set WP-Privacy to allow crawling of my site
2. I have deactivated all WP plugins and I still get the same GWT warnings.
Looking forward to hearing if you have an idea that might work!
Technical SEO | FrederikTrovatten22 -
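One way to sanity-check whether those rules actually block a given URL (independently of whatever GWT reports) is Python's standard-library robots.txt parser. This sketch assumes the robots.txt content quoted in the question; the blog-post path is a made-up example:

```python
import urllib.robotparser

# The rules as quoted in the question above.
ROBOTS_TXT = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A normal blog post is allowed; only the admin paths are blocked.
print(rp.can_fetch("*", "http://www.trovatten.com/some-blog-post/"))  # True
print(rp.can_fetch("*", "http://www.trovatten.com/wp-admin/"))        # False
```

If this says your homepage and posts are allowed but GWT still flags them, that points toward a plugin serving a different, virtual robots.txt than the file you think is live — so fetch the live http://www.trovatten.com/robots.txt and compare.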
Once duplicate content found, worth changing page or forget it?
Hi, the SEOmoz crawler has found over 1,000 duplicate pages on my site. The majority are based on location, and unfortunately I didn't have time to add much location-based info. My question is: if Google has already discovered these, determined they are duplicates, and chosen the main ones to show in the SERPs, is it worth me updating all of them with localized information so Google accepts the changes and maybe considers them different pages? Or do you think they'll always be considered duplicates now?
Technical SEO | SpecialCase0 -
H1 problem on my site not sure how to solve it
Hi, I have just done an on-page grade report for my site www.in2town.co.uk and found that I had a number of H1s, which was not doing my SEO any good. I have sorted most of the H1 problems out, but the report still shows two H1s and I cannot find them. I have found one, which is a short description of the site under the main banner, but I cannot find the second H1. Can anyone please let me know if there is a simple way of finding the other H1 so I can deal with it? Many thanks
Technical SEO | ClaireH-1848860