Why am I getting 400 client errors on pages that work?
-
Hi,
I just ran the initial crawl on my domain and I seem to have eighty 400 client errors. However, when I visit the URLs, the pages load fine. Any ideas on why this is happening and how I can resolve the problem?
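A browser can render a page even when the server returns an error status, so it is worth checking the raw status code a crawler actually sees. A minimal sketch (the URL is a placeholder; substitute one of the flagged URLs from the report):

```python
import urllib.error
import urllib.request

def status_of(url: str) -> int:
    """Return the HTTP status code for a URL, even when it is an error."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        # urlopen raises on 4xx/5xx; the code is still available here.
        return err.code

# example.com is a placeholder, not one of the actual flagged URLs.
print(status_of("https://example.com/"))
```

If this prints 400 for a page that looks fine in the browser, the server is sending an error status alongside a renderable body, and the crawler is reporting it correctly.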
-
Hi Robert,
Thanks for you response. I've resolved the issue now. I started by doing a crawl test which gave me the same number of 400 urls but revealed that they contained "../../../" in them which seems to be something annoying the cms I'm using creates (these weren't being displayed like this in the campaign error view). I've corrected these urls and done a crawl and I seem to be clear of 400s.
-
Rob is correct; there is little we can do to help without a URL.
-
Moesian
Without an example URL, it is hard to properly diagnose the problem. One thing I had to get past when I was first using SEOmoz was that often the errors were not really on URLs that would be in the sitemap per se. With a CMS like WP or Joomla, we sometimes get errors around plugins, themes, widgets, and apps. An example of one is:
ExampleSite.info/wp-content/themes/EarthlyTouch/js/idtabs.js
So make sure you don't just look at the start of the URL; cut and paste the whole thing from the error report to be sure. If you know it is a page that is resolving correctly, I would suggest putting in a ticket for Support (click on Help at the top right of this page) or emailing help@SEOmoz.org.
Related Questions
-
Unsolved: Export to CSV does not work
Hi friends. For a long time I have not been able to download CSV reports for links. I need to export new spam links sorted by spam score and compare them to existing ones, but the Export to CSV button spins forever and nothing happens.
Moz Pro | netcomsia
-
I have duplicate content in my Moz crawl, but Google hasn't indexed those pages: do I still need to get rid of the tags?
I received an urgent error from the Moz crawler saying I have duplicate content on my site due to the tags I have. For example: http://www.1forjustice.com/graves-amendment/ The real article is found here: http://www.1forjustice.com/car-accident-rental-car/ I didn't think this was a big deal, because when I looked at my GWT these pages weren't indexed (picture attached). Question: should I bother fixing this from an SEO perspective? If Google isn't indexing the pages, am I losing link juice?
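One common way to handle tag-archive duplicates like this, assuming you want the tag pages kept out of the index entirely, is a robots meta tag on those pages. A sketch, not CMS-specific code:

```html
<!-- A sketch: served in the <head> of the duplicate tag/archive pages.
     Asks search engines not to index the page while still following its links. -->
<meta name="robots" content="noindex, follow">
```

Even if Google currently skips these pages, the meta tag makes that intentional rather than incidental, and the crawler warning becomes safe to ignore.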
Moz Pro | Perenich
-
Moz crawl only shows 2 pages, but we have more than 1000 pages.
Hi guys, is there any way we can test the Moz crawler? It is showing only 2 pages crawled. We are running the website on HTTPS. Is HTTPS an issue for Moz?
Moz Pro | dotlineseo
-
Duplicate content error?
I am seeing a duplicate content error for the following pages: http://www.bluelinkerp.com/contact/ and http://www.bluelinkerp.com/contact/index.asp. Doesn't the first URL just automatically redirect to the default page in that directory (index.asp)? Why are they showing up as separate duplicate pages?
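Typically there is no redirect here: the web server serves the default document internally, so both URLs return 200 with identical content, which is exactly what a crawler flags as duplicate. One hedged sketch of a fix, assuming the site runs on IIS with the URL Rewrite module installed, is a permanent redirect from the explicit default document back to the directory URL:

```xml
<!-- web.config fragment (assumes IIS with the URL Rewrite module) -->
<rule name="Strip default document" stopProcessing="true">
  <match url="(.*/)?index\.asp$" />
  <action type="Redirect" url="/{R:1}" redirectType="Permanent" />
</rule>
```

A rel="canonical" tag on index.asp pointing at /contact/ is a gentler alternative if a server-level redirect isn't practical.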
Moz Pro | BlueLinkERP
-
SEOmoz crawling filtered pages
Hi, I just checked an SEO campaign we started last week, so I opened SEOmoz to see the crawl diagnostics. Lots of duplicate content and duplicate titles are showing up, but that's because Rogerbot is crawling all of the filtered pages as well. How do I exclude these pages from being crawled?
/product/brand-x/3969?order=brand&sortorder=ASC
/product/brand-x/3969?order=popular&sortorder=ASC
/product/brand-x/3969?order=popular&sortorder=DESC&page=10
/product/brand-x/3969?order=popular&sortorder=DESC&page=11
Moz Pro | nvs.nim
-
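One way to keep a crawler out of parameter-filtered URLs like the ones above is a wildcard Disallow in robots.txt. Rogerbot respects robots.txt, though the exact user-agent token and its wildcard support are worth confirming against Moz's own documentation before relying on this sketch:

```
User-agent: rogerbot
Disallow: /*?order=
```

Note this only stops the crawl noise; if you also want search engines to consolidate these variants, a canonical tag on the filtered pages pointing at the unfiltered listing is the more thorough fix.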
20000 site errors and 10000 pages crawled.
I have recently built an e-commerce website for the company I work at. It's built on OpenCart. Say, for example, we have a chair for sale. The URL will be: www.domain.com/best-offers/cool-chair. That's fine; SEOmoz is crawling them all and reporting any errors under that URL, great. On each product listing we have several options and zoom options (allowing the user to zoom in on the image for a more detailed look). When a different zoom type is selected it is added onto the URL, so for example: www.domain.com/best-offers/cool-chair?zoom=1, and there are 3 different zoom types. So effectively it is treating four URLs as different when in fact they are all one page, and SEOmoz has interpreted it this way, crawled 10,000 pages it thinks exist because of this, and thrown up 20,000 errors. Does anyone have any idea how to solve this?
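A common fix, assuming the zoom variants should all count as the one product page, is a canonical link in the head of every variant (the URL is taken from the example above):

```html
<!-- Served identically on cool-chair, cool-chair?zoom=1, ?zoom=2, and ?zoom=3:
     tells crawlers all four URLs are the same page. -->
<link rel="canonical" href="http://www.domain.com/best-offers/cool-chair" />
```

With the canonical in place, the parameterized variants consolidate to the base URL instead of being counted as separate duplicate pages.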
Moz Pro | CompleteOffice
-
How does On-Page Analysis work?
Hi guys, I just need to run something past you. When I look at my On-Page Analysis, I have 5 key terms I am focusing on. For instance, one of them is "computer backup". According to the report, the current grade is 'F' when looking at site page "/", which I assume is the home page.
When I do a lookup on other pages of the site, it gets a ranking of A, which is good. But since the homepage ranking went from A to F, my rankings have definitely been affected. So I guess my questions are: does "/" mean the homepage, or all pages overall? What should I really be looking at here? I am assuming that you select certain pages to target certain keywords. Should I be focusing like this, or more on the "/"? Thanks guys, hoping to clear this one up.
Moz Pro | cubetech
-
Domain vs. page authority?
Hey, I've been told that page authority is more important than domain authority in Open Site Explorer. Why is that?
Moz Pro | daxvirgo