Why is Google Webmaster Tools showing 404 Page Not Found Errors for web pages that don't have anything to do with my site?
-
I am currently working on a small site with around 50 web pages. In the Crawl Errors section of WMT, Google has highlighted over 10,000 page-not-found errors for pages that have nothing to do with my site. Has anyone come across this before?
-
These extensions look like they are attachments. Go into GWT and click on the 404 link; a box will pop up. Click on the "Linked From" tab, go to the linking page, and press Ctrl+U to view the source code. Then press Ctrl+F and search for the broken link. When you find it in your source code, you should be able to figure out what's triggering that response. If you can't find the URLs in your source code, mark them as fixed and that should take care of the problem, especially if they are older. It looks like it could be a shipment status, a product out-of-stock message, and a PDF of train schedules.
I would also check the "Linked From" pages and make sure there isn't some erroneous code creating a page when it doesn't need to.
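With 10,000+ of these, checking each "Linked From" page by hand gets tedious. Here is a minimal sketch of the same check done in bulk with Python and the requests library; the linking-page URLs are hypothetical placeholders (the actual domain isn't given in the thread), and the broken path is one of the examples quoted later in it:

import requests

# Hypothetical placeholders: the pages GWT lists under "Linked From"
# for one of the 404s, plus the broken path itself (taken from the
# examples quoted in this thread).
linking_pages = [
    "http://www.example.com/some-page/",
    "http://www.example.com/another-page/",
]
broken_path = "/fs/201410/a_Food_stalls_at_train_stations_in_London_.html"

for page in linking_pages:
    try:
        resp = requests.get(page, timeout=10)
    except requests.RequestException as err:
        print(f"{page}: request failed ({err})")
        continue
    # Same idea as Ctrl+U / Ctrl+F: search the page source
    # for the broken link.
    if broken_path in resp.text:
        print(f"{page}: still links to the broken URL")
    else:
        print(f"{page}: broken link not found in source")

Any page that still contains the path is the one generating the bad link; if none do, marking the errors as fixed should make them stay gone.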
-
I think you could have the following problem:
You've got a domain which earned links before, and those links are still out there. Google can see them, so you will see them in WMT again and again.
You can't just mark them as fixed and expect them to be gone; because of the backlinks, they keep coming back.
Check it in WMT (I don't know what it's called in the English version, I only see the German one): you can click on the 404 and then take a look at "what is linking to that page".
An easy solution in that case may be to disallow "/fs" for bots (if you don't use /fs in your own URL structure).
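For illustration, a minimal robots.txt along those lines would look like this (assuming /fs/ genuinely appears nowhere in your own URL structure, which only you can confirm):

User-agent: *
Disallow: /fs/

This matches the /fs/201410/... paths quoted in the question and stops compliant crawlers from requesting them.
-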
The 404s start with the correct URL for the site but are then suffixed, after the forward slash, with paths such as:
fs/201410/a_Royal_Mail_Item_Is_Currently_Being_Processed_For_Delivery_.html
/fs/201410/a_When_will_John_lewis_restock_the_Dr_Dre_beats_solo_hd_.html
fs/201410/a_Food_stalls_at_train_stations_in_London_.html
There are 10,000+ of these in my WMT account. I have never seen this before - any ideas?
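One quick way to confirm what the server actually returns for these paths is a status-code check; www.example.com below stands in for the real domain, which isn't named in the thread:

curl -s -o /dev/null -w "%{http_code}\n" "http://www.example.com/fs/201410/a_Food_stalls_at_train_stations_in_London_.html"

A 404 here confirms the server is correctly refusing the URL, and the question becomes purely where Google keeps finding the links.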
-
Need more data to help, eh!
-
What do you mean by "pages that have nothing to do with my site"? Are these not on your domain, or are they on your domain but you're not familiar with them?
Related Questions
-
I want to move some pages of my website into a folder, and the nav menu on those pages should only show inner-page links. Will it hurt SEO?
Hi, my website has a few SaaS products. To keep the site simple, I want to move some pages into a product-specific folder structure, e.g. website.com/product1/features, website.com/product1/pricing, website.com/product1/information, and the same for product2 and so on. The menu on website.com/product1/.. pages would only show the links for product1, plus one link to the homepage (possibly in the footer). Please share your opinion on whether this is a good idea. From a UI perspective it will be simple, but I am not sure about the SEO perspective. Please help, thanks!
Technical SEO | webbeemoz
-
Site Audit Tools Not Picking Up Content, Nor Does Google Cache
Hi guys, I've got a site I am working with on the Wix platform. However, site audit tools such as Screaming Frog, Ryte, and even Moz's on-page crawler show the pages as having no content, despite them having 200+ words. Fetching the site as Google clearly shows the rendered page with content, yet when I look at the Google cached pages, they show just blank pages. I have had issues with nofollow and noindex on here, but the meta tags show up correctly; there is just no content. What would you look at to diagnose this? I am guessing some rogue JS, but then why wasn't this picked up by the "fetch as Google"?
Technical SEO | nezona
-
Robots.txt & meta noindex: site still shows up in Google Search
I have set up my robots.txt like this:
User-agent: *
Disallow: /
and I have this meta tag in my <head> on a WordPress site set up with Yoast SEO:
<meta name="robots" content="noindex,follow" />
I did "Fetch as Google" in my Google Search Console. My website is still showing up in the search results, and it says: "A description for this result is not available because of this site's robots.txt". This site has not shown up for years, and now it is ranking above the site that I want to rank for this keyword. How do I get Google to ignore this site? This seems really weird, and I am confused how a site with little content that has not been updated for years can rank higher than a site that is constantly updated and improved.
Technical SEO | RoxBrock
-
Will deleting WordPress tags result in 404 errors or anything?
I want to clean up my tags, and I am worried I will look in my webmaster tools the next day and see hundreds of errors. What's the best way of doing this?
Technical SEO | howlusa
-
Google insists robots.txt is blocking... but it isn't.
I recently launched a new website. During development, I'd enabled the option in WordPress to prevent search engines from indexing the site. When the site went public (over 24 hours ago), I cleared that option. At that point, I added a specific robots.txt file that only disallowed a couple of directories of files. You can view the robots.txt at http://photogeardeals.com/robots.txt. Google (via Webmaster Tools) insists that my robots.txt file contains a "Disallow: /" on line 2, and that it is preventing Google from indexing the site and preventing me from submitting a sitemap. These errors show up both in the sitemap section of Webmaster Tools and in the Blocked URLs section. Bing's webmaster tools are able to read the site and sitemap just fine. Any idea why Google insists I'm disallowing everything, even after telling it to re-fetch?
Technical SEO | ahockley
-
Google's "cache:" operator is returning a 404 error.
I'm using the "cache:" operator on one of my sites and Google is returning a 404 error. I've swapped out the domain with another and it works fine. Has anyone seen this before? I'm wondering if Google is crawling the site now? Thanks!
Technical SEO | AZWebWorks
-
Is there a great tool for mapping URLs from an old website to a new one?
We are implementing a new design, removing some pages, and adding new content. The task is to correctly map and redirect old pages that no longer exist.
Technical SEO | KnutDSvendsen
-
Do or don't: forward a parked domain to a live website?
Hi all, I'm new to SEO and excited to see the launch of this forum. I've searched for an answer to this question but haven't been able to find one. I "attended" two webinars recently regarding SEO. The above subject was raised in each one, and the speakers gave polar opposite recommendations, so I'm completely at a loss as to what to do with some domains that are related to a domain used on a live website whose SEO I'm working to improve. The scenario: the live website is at (fictitious) www.digital-slr-camera-company.com, and I also have two related domain names parked with the registrar: www.dslr.com and www.digitalslr.com. The question: is there any SEO benefit to be gained by pointing the two parked domains to the website at www.digital-slr-camera-company.com? If so, what method of "pointing" should be used? Thanks for any and all input.
Technical SEO | Technical_Contact