Sitemap issue? 404s & 503s are regenerating?
-
I am using the WordPress SEO plugin by Yoast to generate a sitemap on http://www.atozqualityfencing.com. Last month, I had an associate create redirects for over 200 404 errors. She did this via the .htaccess file. Today, the same number of 404s are back, along with a number of 503 errors. This new WordPress website was built in a subdirectory and made live by adding some code to the root .htaccess file to direct browsers to the content we wanted live. In other words, the content actually resides in a subdirectory titled "newsite" but is served on the main URL.
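For context, my associate added the rules, so the snippet below is only an illustration of the patterns typically used for this kind of setup (one-line 301 redirects for the old URLs, plus a rewrite that serves the "newsite" folder from the root URL); the exact rules on our server may differ:
```
# Sketch only - the actual .htaccess on the server may differ.
# Example of the kind of one-line 301 redirect added for an old 404 URL
# (the paths here are hypothetical):
Redirect 301 /old-fence-page/ http://www.atozqualityfencing.com/new-fence-page/

# Common pattern for serving a site that lives in /newsite/ from the root URL:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?atozqualityfencing\.com$
RewriteCond %{REQUEST_URI} !^/newsite/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ /newsite/$1 [L]
RewriteCond %{HTTP_HOST} ^(www\.)?atozqualityfencing\.com$
RewriteRule ^(/)?$ newsite/index.php [L]
</IfModule>
```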
Can you tell me why we are having these 404 & 503 errors? I have no idea where to begin looking.
-
You likely have a .htaccess issue causing a rewrite error. You may want to examine your .htaccess or replace it with the default WordPress version. I've also seen some plugins cause this kind of error.
What is happening is this:
http://www.atozqualityfencing.com/newsite
is sent to:
http://www.atozqualityfencing.com/newsite/
Note the trailing slash.
But that page is returning a 404 error.
If I go to
http://www.atozqualityfencing.com/newsite/index.php it redirects to
http://www.atozqualityfencing.com/newsite/
So there is likely something wrong in the redirect rules. I would try disabling all plugins. If that fails, compare the current .htaccess to a default one and remove any modifications.
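For comparison, this is the stock rewrite block that WordPress writes to .htaccess on a typical Apache setup. Your working file will also need whatever rules map the root URL to /newsite/, so treat this as a baseline to compare against rather than a drop-in replacement:
```
# Default WordPress rewrite rules, for comparison:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```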
-
Wondering if anyone else out there has any insight into whether the information in my previous post is correct.
-
Oye, Jeff - this is a little bit over my head, so bear with me as I work it through.
I went to redbot.org and entered the URL where the main website is actually living (http://www.atozqualityfencing.com/newsite). I received this information:
```
HTTP/1.1 301 Moved Permanently
Date: Sun, 24 Aug 2014 14:56:10 GMT
Server: Apache
Location: http://www.atozqualityfencing.com/newsite/
Cache-Control: max-age=3600
Expires: Sun, 24 Aug 2014 15:56:10 GMT
Content-Length: 326
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
```
When I clicked on the URL listed under Location above, I received the following information:
```
HTTP/1.1 404 Not Found
Date: Sun, 24 Aug 2014 14:59:59 GMT
Server: Apache
X-Pingback: http://www.atozqualityfencing.com/newsite/xmlrpc.php
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Cache-Control: no-cache, must-revalidate, max-age=0
Pragma: no-cache
Vary: Accept-Encoding,User-Agent
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
```
This has me confused, and I'm wondering if the method used for making the revised website live is either not good or is missing something. Here are the articles that were followed for "moving" the newsite redesign to the live URL:
http://codex.wordpress.org/Giving_WordPress_Its_Own_Directory
http://codex.wordpress.org/Moving_WordPress#When_Your_Domain_Name_or_URLs_Change
Can you provide any further assistance? Thanks, Janet
-
A 503 error is a service unavailable error. I have seen situations where redirects are incorrect and loop. Depending on the hosting setup, this can trigger various HTTP error codes.
The best way to debug this is by looking at your Apache access logs. Scan your logs for the 503 errors. Pay attention to the URL being requested as well as the referring URL.
Very likely there is some looping process; in setups where Apache runs PHP via FastCGI, a loop can trigger a 503 because too many processes get spawned.
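To illustrate the kind of rule that loops, here is a contrived example (not taken from your site):
```
# Contrived example of a looping rewrite - not your actual rules.
# Every request is pushed into /newsite/, but the rewritten URL
# (/newsite/page) matches the pattern again, so Apache rewrites it to
# /newsite/newsite/page, and so on, until the request errors out.
RewriteEngine On
RewriteRule ^(.*)$ /newsite/$1 [L]

# The usual fix is a condition that excludes URLs already inside /newsite/:
# RewriteCond %{REQUEST_URI} !^/newsite/
# RewriteRule ^(.*)$ /newsite/$1 [L]
```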
Also, because of how WP handles 404s, I've seen many plugins mask the underlying cause. So if you have any plugins that affect error handling, you may need to disable them while debugging.
You can also use http://www.redbot.org/ to check the headers for any page that should be redirected. That tool should return a Location header with a URL. Visit that Location URL in your browser and make sure it resolves.
The goal here is to try to replicate the behavior. Once you can replicate the behavior, dig into your redirect/rewrite rules and examine the logic to determine why you are seeing the loops or failures.