Weird 404 errors in Webmaster Tools
-
Hi,
In a regular check with Webmaster Tools, I noticed some weird 404 errors. My domain URL is something like http://domainname.com/, and the 404 errors point to odd URLs such as http://domainname.com/james-bond&page=2/ and http://domainname.com/juegos-de&page=3/. At first I tried to block them with robots.txt, but I'm now getting a lot of these 404 errors, and I don't think blocking them all one by one is an ideal solution.
Can anyone help me out with the issue?
Thank you in advance.
Cheers. -
Hi nishthaj,
I think blocking with robots.txt is a good solution, but you don't have to block each URL manually; it's better to block every URL that contains & or =.
I think the directives for this are:
Disallow: /*&
Disallow: /*=
This way you automatically block all present and future weird URLs like these, and you prevent the problem from coming back.
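If you want to sanity-check the patterns before adding them, here's a minimal sketch (my own illustration; the domain and the first two URLs are the placeholders from your question, the third is a made-up clean URL, and the regexes are only a rough approximation of how Googlebot expands the * wildcard):

```python
# Rough sketch: translate the two wildcard Disallow rules into regexes
# (robots.txt "*" matches any run of characters) and test them against
# the sample URLs from the question.
import re
from urllib.parse import urlparse

rules = [re.compile(r"^/.*&"), re.compile(r"^/.*=")]

tests = [
    "http://domainname.com/james-bond&page=2/",
    "http://domainname.com/juegos-de&page=3/",
    "http://domainname.com/a-normal-page/",   # hypothetical clean URL for comparison
]

for url in tests:
    path = urlparse(url).path
    verdict = "blocked" if any(r.search(path) for r in rules) else "allowed"
    print(f"{verdict:7}  {url}")
```

Running it should report the first two URLs as blocked and the clean one as allowed; if so, the two Disallow lines above will cover current and future URLs of the same shape.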
Hope that helped
Related Questions
-
Xml sitemaps giving 404 errors
We have recently made updates to our XML sitemap and have split it into child sitemaps. Once these were submitted to Search Console, we received notification that all of the child sitemaps except one produced 404 errors. However, when we view the XML sitemaps in a browser, there are no errors. I have also attempted crawling the child sitemaps with Screaming Frog and received 404 responses there as well. My developer cannot figure out what is causing the errors, and I'm hoping someone here can assist. Here is one of the child sitemaps: http://www.sermonspice.com/sitemap-countdowns_paged_1.xml
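One common (though not certain) cause of "fine in a browser, 404 to crawlers" is a server that answers differently depending on the User-Agent. A minimal sketch to compare status codes, assuming nothing beyond the sitemap URL quoted above:

```python
# Minimal sketch: request the sitemap with a browser-like and a Googlebot-like
# User-Agent and compare the HTTP status codes the server returns.
import urllib.request
import urllib.error

SITEMAP = "http://www.sermonspice.com/sitemap-countdowns_paged_1.xml"

user_agents = {
    "browser-like": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "crawler-like": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
}

for label, ua in user_agents.items():
    req = urllib.request.Request(SITEMAP, headers={"User-Agent": ua})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(f"{label}: {resp.status}")
    except urllib.error.HTTPError as e:
        print(f"{label}: {e.code}")
```

If the two requests come back with different codes, the issue is in how the server treats non-browser requests (user-agent sniffing, a security module, etc.) rather than in the sitemap files themselves.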
Technical SEO | | ang0 -
Schema Markup Errors - Priority or Not?
Greetings All... I've been digging through the Search Console on a few of my sites and have noticed quite a few structured data errors. Most of the errors relate to hcard, hentry, and hatom: most are missing author and entry-title, while the other is missing fn. I recently saw an article on SEL about Google's focus on spammy markup. The sites I use are built and managed by vendors, so I would have to impress upon them the impact of these errors and have them prioritize and fix them. My question is whether this should be prioritized. Should I have them correct these errors sooner rather than later, or can I take a phased approach? I haven't noticed any loss in traffic or anything like that; I'm more focused on what negative impact a "phased approach" could have. Any thoughts?
Technical SEO | | AfroSEO0 -
Pages appear fine in browser but 404 error when crawled?
I am working on an eCommerce website that has been written in WordPress with the shop pages in E commerce Plus PHP v6.2.7. All the shop product pages appear to work fine in a browser but 404 errors are returned when the pages are crawled. WMT also returns a 404 error when ‘fetch as Google’ is used. Here is a typical page: http://www.flyingjacket.com/proddetail.php?prod=Hepburn-Jacket Why is this page returning a 404 error when crawled? Please help?
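One thing worth checking (a guess, not a confirmed diagnosis): a page can render its content normally while the server still sends a 404 status line, and that status code is what crawlers and "Fetch as Google" report. A minimal sketch using the product URL from the question:

```python
# Minimal sketch: print the raw HTTP status line for the product page, since a
# page can look fine in a browser even when the server answers with a 404.
import http.client
from urllib.parse import urlparse

url = "http://www.flyingjacket.com/proddetail.php?prod=Hepburn-Jacket"
parts = urlparse(url)

conn = http.client.HTTPConnection(parts.netloc, timeout=10)
conn.request("GET", f"{parts.path}?{parts.query}",
             headers={"User-Agent": "Mozilla/5.0 (compatible; status-check/1.0)"})
resp = conn.getresponse()
print(resp.status, resp.reason)               # e.g. "404 Not Found" even if HTML follows
print("Content-Type:", resp.getheader("Content-Type"))
conn.close()
```

If this prints 404 even though the page renders, the server-side routing is returning the wrong status code for valid product URLs.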
Technical SEO | | Web-Incite0 -
Bogus Crawl Errors in Webmaster Tools?
I am suddenly seeing a ton of crawl errors in Webmaster Tools. Almost all of them are URL links coming from scraper sites that I do not own. Do you see these in your Webmaster Tools account? Do you mark them as "fixed" if they are on a scraper site? There are waaaay too many of these to make redirects. Thanks!
Technical SEO | | EGOL0 -
A few misc Webmaster tools questions & Robots.txt etc
Hi, I have a few general misc questions re robots.txt & GWT: 1) In the robots.txt file, what do the lines below block? Internal search? Disallow: /? Disallow: /*? 2) Also, the site's feeds are blocked in robots.txt; why would you want to block a site's feeds? 3) What's the best way to deal with the following: an old removed page that's returning a 500 response code; a soft 404 for an old removed page that has no current replacement; old removed pages returning a 404. The old pages didn't have any authority or inbound links, so is it best/OK to simply create a URL removal request in GWT? Cheers, Dan
Technical SEO | | Dan-Lawrence0 -
403 error
Hey guys, I know that a 403 is not a terrible thing, but is it worthwhile fixing? If so, what is the best way to approach it? Cheers
Technical SEO | | Adamshowbiz0 -
How to properly remove 404 errors
Hi, According to the SEOmoz report I have two 404 errors on my site (http://screencast.com/t/2FG8fA1dvGB). I removed them from Google Webmaster Central about 2 weeks ago (http://screencast.com/t/MQ8XBvrFm), but they're still showing as errors in the next report (weekly update). Is there anything else you should do about 404s, or do you just remove the URLs through GWC? Or maybe the SEOmoz data is delayed? Thanks in advance, JJ
Technical SEO | | jjtech0 -
How to fix duplicate page content error?
SEOmoz's Crawl Diagnostics is complaining about a duplicate page error. Examples of links that have the duplicate page content error are http://www.equipnet.com/misc-spare-motors-and-pumps_listid_348855 http://www.equipnet.com/misc-spare-motors-and-pumps_listid_348852 These are not duplicate pages; some values differ between the two pages, such as the listing #, EquipNet tag #, and price. I am not sure how to highlight the things that differ between the two pages, like the Equipment Tag # and listing #. Would the errors resolve if I used some style attribute to highlight those values on the page? Please help me with this, as I am not really sure why SEOmoz thinks both pages have the same content. Thanks!!!
Technical SEO | | RGEQUIPNET0