Sitemap issue? 404s & 500s are reappearing?
-
I am using the WordPress SEO plugin by Yoast to generate a sitemap on http://www.atozqualityfencing.com. Last month, I had an associate create redirects for over 200 404 errors, which she did via the .htaccess file. Today, the same number of 404s are back, along with a number of 503 errors. This new WordPress website was built in a subdirectory and made live by adding some code to the .htaccess file to point browsers at the new content. In other words, the content actually resides in a subdirectory titled "newsite" but is served on the main URL.
Can you tell me why we are having these 404 & 503 errors? I have no idea where to begin looking.
-
You likely have a .htaccess issue causing a rewrite error. You may want to examine your .htaccess or replace it with a default one. I've also seen some plugins cause this kind of error.
Here is what is happening:
http://www.atozqualityfencing.com/newsite
redirects to:
http://www.atozqualityfencing.com/newsite/
Note the trailing slash. That page is returning a 404 error.
If I go to http://www.atozqualityfencing.com/newsite/index.php, it redirects to http://www.atozqualityfencing.com/newsite/ as well.
So there is likely something wrong in the redirect rules. I would try disabling all plugins. If that fails, compare the current .htaccess to a default one and remove any modifications.
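For comparison, here is a rough sketch of what a "default" root .htaccess looks like when a WordPress install lives in a subdirectory such as /newsite but is served from the main URL, which is the setup described in the question. The domain and directory names are taken from this thread; everything else is generic, so treat it as an illustration rather than what your file should contain verbatim.
```apache
# Minimal sketch of a default root .htaccess for a WordPress install kept in
# /newsite but served from the main URL. Assumes the Codex "Giving WordPress
# Its Own Directory" approach, where a copy of index.php in the document root
# requires ./newsite/wp-blog-header.php.
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
# Only send requests to WordPress if they don't match a real file or directory
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```
One common gotcha: custom redirect rules generally need to sit above the # BEGIN WordPress block; otherwise the catch-all rewrite to index.php picks up the request first and the redirects never fire.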
-
Wondering if anyone else out there has some insight as to whether the information in my previous post is correct.
-
Oye, Jeff - this is a little bit over my head so bear with me as I work it through.
I went to redbot.org and entered the URL where the main website actually lives (http://www.atozqualityfencing.com/newsite). I received this information:
```
HTTP/1.1 301 Moved Permanently
Date: Sun, 24 Aug 2014 14:56:10 GMT
Server: Apache
Location: http://www.atozqualityfencing.com/newsite/
Cache-Control: max-age=3600
Expires: Sun, 24 Aug 2014 15:56:10 GMT
Content-Length: 326
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
```
When I clicked the URL listed in the Location header above, I received the following:
```
HTTP/1.1 404 Not Found
Date: Sun, 24 Aug 2014 14:59:59 GMT
Server: Apache
X-Pingback: http://www.atozqualityfencing.com/newsite/xmlrpc.php
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Cache-Control: no-cache, must-revalidate, max-age=0
Pragma: no-cache
Vary: Accept-Encoding,User-Agent
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
```
This has me confused, and I am wondering if the method used for building the revised website is either not good or is missing something. Here are the articles that were followed for "moving" the newsite redesign to the live URL:
http://codex.wordpress.org/Giving_WordPress_Its_Own_Directory
http://codex.wordpress.org/Moving_WordPress#When_Your_Domain_Name_or_URLs_Change
Can you provide any further assistance? Thanks, Janet
-
A 503 is a "Service Unavailable" error. I have seen situations where incorrect redirects loop, and depending on the hosting setup that can trigger various HTTP error codes.
The best way to debug this is by looking at your Apache access logs. Scan your logs for the 503 errors. Pay attention to the URL being requested as well as the referring URL.
Very likely there is a redirect loop somewhere, and when Apache runs on FastCGI a loop can spin up enough processes to trigger a 503.
Also, because of how WordPress handles 404s, I've seen many plugins mask the underlying cause. If you have any plugins that affect error handling, you may need to disable them while debugging.
You can also use http://www.redbot.org/ to check the headers for any page that should be redirected. That tool should return a Location header with a URL. Visit that Location URL in your browser and make sure it resolves.
The goal here is to replicate the behavior. Once you can, dig into your redirect/rewrite rules and examine the logic to determine why you are seeing the loops or failures.
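If you can reproduce the problem but can't tell which rule is causing it, mod_rewrite's own trace logging is the quickest way to watch each rewrite step. These directives go in the server or vhost configuration rather than .htaccess, and they differ by Apache version, so the snippet below is a sketch to adapt to whatever your host allows:
```apache
# Apache 2.4+: raise mod_rewrite's log level in the server/vhost config while
# debugging, then remove it afterwards (it is very noisy).
LogLevel alert rewrite:trace3

# Apache 2.2 equivalent (older directives, removed in 2.4):
# RewriteLog /var/log/apache2/rewrite.log
# RewriteLogLevel 3
```
Each internal rewrite pass gets logged, so a loop (for example between /newsite and /newsite/) or a rule that keeps rewriting back to the same URL shows up clearly.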