Some URLs were not accessible to Googlebot due to an HTTP status error.
-
Hello, I'm an SEO newbie and some help from the community here would be greatly appreciated.
I have submitted the sitemap of my website in Google Webmaster Tools and I have now received this warning:
"When we tested a sample of the URLs from your Sitemap, we found that some URLs were not accessible to Googlebot due to an HTTP status error. All accessible URLs will still be submitted."
How do I fix this? What should I do?
Many thanks in advance.
-
You need to confirm that 100% of the URLs going into the sitemap are in fact accessible.
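If you want to verify that yourself before resubmitting, a short script can fetch the sitemap and test the status code of every URL in it. A minimal sketch, assuming Python with the third-party requests library; the sitemap URL is a placeholder:

```python
# Rough sketch (not production code): fetch an XML sitemap and report every
# URL that does not return HTTP 200. Assumes a standard sitemaps.org sitemap.
import requests
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def check_sitemap(sitemap_url):
    resp = requests.get(sitemap_url, timeout=10)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    for loc in root.findall(".//sm:loc", NS):
        url = loc.text.strip()
        # HEAD is cheaper than GET; allow_redirects=False also surfaces 3xx codes.
        status = requests.head(url, allow_redirects=False, timeout=10).status_code
        if status != 200:
            print(status, url)

check_sitemap("http://example.com/sitemap.xml")  # placeholder URL
```

Every URL this prints is one Googlebot would also stumble over; fix it or drop it from the sitemap.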
If it's a big issue on a big site, send me the URL in a private message and I will use DeepCrawl to create an XML sitemap for you. The Screaming Frog tool is excellent as well, though it does not perform as well with extremely large sites.
Also check your robots.txt file; this is a great tool for that, especially in case you have more than one (it happens):
http://www.internetmarketingninjas.com/seo-tools/robots-txt-generator/
or
http://tools.seochat.com/tools/robots-txt-validator/
Many more great free tools can be found here: http://tools.seochat.com/tools/
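On the robots.txt point: if the file accidentally disallows URLs that are listed in your sitemap, Webmaster Tools will complain about those URLs as well. For reference, a minimal healthy file looks something like this (the Disallow path is a hypothetical example):

```
User-agent: *
# Hypothetical: keep a private area out of the index, allow everything else
Disallow: /admin/

Sitemap: http://example.com/sitemap.xml
```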
It could be a number of things, although it could also just be Google being finicky. Run the site through the Moz crawler, use the SEO tools section of feedthebot.com, or download the free version of http://www.screamingfrog.co.uk/seo-spider/ ; these will tell you if there is an issue. If your site is static you can even create an alternate sitemap with Screaming Frog; if your site is large, use DeepCrawl or Moz Analytics.
Be certain there are no sitemaps redirecting to each other, i.e. no redirects going from the old sitemap to the new one. Make certain the sitemap is in XML format, e.g. http://example.com/sitemap.xml, or, if it resolves at a different address such as https://example.com/sitemap_index.xml, that the address which actually resolves when you view the sitemap is the one that goes into Webmaster Tools. If you split the sitemap, keep each file within the protocol's limit (50,000 URLs per file) and number the parts appropriately: example.com/sitemap1.xml, example.com/sitemap2.xml, and so on. Sometimes Google is overloaded and does not seem to play well with certain sitemaps, or the sitemap is not generating properly on the server and starts working later on; if this is a long-term problem, speak to your host or developer. My recommendation, once you have done everything above, is to resubmit the sitemap to Webmaster Tools, or simply build a new sitemap and submit that.
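For reference, if you do split the sitemap into numbered files, the standard way to tie them together is a sitemap index. A minimal sketch of the format with placeholder URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://example.com/sitemap1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://example.com/sitemap2.xml</loc>
  </sitemap>
</sitemapindex>
```

You then submit only the index file to Webmaster Tools; Google discovers the parts from it.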
If worse comes to worst, build the sitemap with Screaming Frog and use this URL to submit your content to Google:
http://www.google.com/submityourcontent/business-owner/
I hope that helps,
Thomas
-
Hi, it looks like you have URLs in your sitemap that return an HTTP status error. You can look up those URLs and remove them from your sitemap, or make sure they return the right status. Does the warning say which status error, and which URLs? Did you check those URLs? When you use the Screaming Frog spider tool (free), you can search for status errors; this is an easy way to find these URLs.
Grtz, Leonie
Related Questions
-
Folders in URL structure?
Hello, I'm revamping an out-of-date website and am wondering if I need to include the folders (categories) in the URL structure. The proposed structure has 8 main folders. I've been reading that Google is OK if the folder is not included in the URL, but is it really? My hesitation is that the URLs are getting long, and the main folder has only a single subfolder beneath it, so: /folder-name/facility-name/treatment-overview. This looks too long, doesn't it? Thanks!
Technical SEO | lfrazer123
-
404 errors
Hi, I am getting these showing up in WMT crawl errors; any help would be very much appreciated:

| # | URL | Status | Date |
| --- | --- | --- | --- |
| 1 | ?escaped_fragment=Meditation-find-peace-within/csso/55991bd90cf2efdf74ec3f60 | 404 | 12/5/15 |
| 2 | mobile/?escaped_fragment= | 404 | 10/26/15 |
| 3 | ?escaped_fragment=Tips-for-a-balanced-lifestyle/csso/1 | 404 | 12/1/15 |
| 4 | ?escaped_fragment=My-favorite-yoga-spot/csso/5598e2130cf2585ebcde3b9a | 404 | 12/1/15 |
| 5 | ?escaped_fragment=blog/c19s6 | 404 | 11/29/15 |
| 6 | ?escaped_fragment=blog/c19s6/Tag/yoga | 404 | 11/30/15 |
| 7 | ?escaped_fragment=Inhale-exhale-and-once-again/csso/2 | 404 | 11/27/15 |
| 8 | ?escaped_fragment=classes/covl | 404 | 10/29/15 |
| 9 | m/?escaped_fragment= | 404 | 10/26/15 |
| 10 | ?escaped_fragment=blog/c19s6/Page/1 | 404 | 11/30/15 |

Technical SEO | ReSEOlve
-
URL folder structure
I work for a travel site and we have pages for properties in destinations, and I am trying to decide how best to organize the URLs. Basically we have our main domain and resort pages, and we'll also have articles about each resort, so the URL structure will actually get longer:

A. Keep the keyword and location as separate folders: domain.com/main-keyword/state/city-region/resort-name
e.g. domain.com/family-condo-for-rent/orlando-florida/liki-tiki-village
and for a resort feature page: domain.com/family-condo-for-rent/orlando-florida/liki-tiki-village/kid-friend-pool

B. Another way to structure this would be to remove the location and keyword folders and combine them. Note that some of the resort names are long and spaces are being replaced dynamically with dashes: domain.com/main-keyword-in-state-city/resort-name
e.g. domain.com/family-condo-for-rent-in-orlando-florida/liki-tiki-village
and for a feature: domain.com/main-keyword-in-state-city/resort-name-feature
e.g. domain.com/family-condo-for-rent-in-orlando-florida/liki-tiki-village-kid-friend-pool

Question: is that too many folders, or should I combine or break them up? What would you do with this? Trying to avoid too many dashes.

Technical SEO | Vacatia_SEO
-
URL - Well Formed or Malformed
Hi Mozzers, I've been mulling over whether my URLs could benefit from a little SEO tweaking. I'd be grateful for your opinion.

For instance, we have a product: a vintage (second hand) red Chanel bag. At the moment the URL is:
www.vintageheirloom.com/vintage-chanel-bags/2.55-bags/red-2.55-classic-double-flap-bag-1362483150

Broken down:
- vintage-chanel-bags = the main product category, i.e. vintage Chanel bags
- 2.55-bags = a sub-category of the main category above. They are vintage Chanel 2.55 bags, but I've not included 'vintage' again; 2.55 bags are a type of Chanel bag.
- red-2.55-classic-double-flap-bag = the product, the bag itself
- 1362483150 = a unique ID, to prevent the possibility of duplicate URLs

As you no doubt can see, we target in particular the phrase **vintage**. The actual product title is: Vintage Chanel Red 2.55 classic double flap bag 10” / 25cm.

With this in mind, would I be better off trying to match the product name with the end of the URL as closely as possible? A close match would involve not repeating 'chanel' again:
www.vintageheirloom.com/chanel-bags/2.55-bags/vintage-red-2.55-classic-double-flap-bag

An exact match would involve repeating 'chanel':
www.vintageheirloom.com/chanel-bags/2.55-bags/vintage-chanel-red-2.55-classic-double-flap-bag

This may open up more flexibility to experiment with product terms like second hand, preowned, etc. Maybe this is a bad idea, as I'm removing the phrase 'vintage' from the main category. And the logical extension of this looks like keyword stuffing!
www.vintageheirloom.com/vintage-chanel-bags/vintage-2.55-bags/vintage-chanel-red-2.55-classic-double-flap-bag

Maybe this is over-analyzing, but I doubt it? Thanks for looking. Kevin

Technical SEO | well-its-1-louder
-
403 Forbidden error on website
Hi Mozzers, I have a question about a new website for a new customer, http://www.eindexamensite.nl/. There is a 403 Forbidden error on it, and I can't find what the problem is. I have checked it on http://gsitecrawler.com/tools/Server-Status.aspx, with this result:

URL=http://www.eindexamensite.nl/ **Result code: 403 (Forbidden)**

When I delete the .htaccess from the server I get a 200 OK :-), so the problem is in the .htaccess. The CMS is TYPO3. This is the .htaccess code:

```apache
ErrorDocument 404 /error.html

RewriteEngine On
RewriteRule ^home$ / [L]
RewriteRule ^typo3$ - [L]
RewriteRule ^typo3/.*$ - [L]
RewriteRule ^uploads/.*$ - [L]
RewriteRule ^fileadmin/.*$ - [L]
RewriteRule ^typo3conf/.*$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-l
RewriteRule .* index.php

# Start rewrites for static file caching
RewriteRule ^(typo3|typo3temp|typo3conf|t3lib|tslib|fileadmin|uploads|screens|showpic.php)/ - [L]
RewriteRule ^home$ / [L]

# Don't pull *.xml, *.css etc. from the cache
RewriteCond %{REQUEST_FILENAME} !^.*\.xml$
RewriteCond %{REQUEST_FILENAME} !^.*\.css$
RewriteCond %{REQUEST_FILENAME} !^.*\.php$

# Check for Ctrl-Shift reload
RewriteCond %{HTTP:Pragma} !no-cache
RewriteCond %{HTTP:Cache-Control} !no-cache

# NO backend user is logged in
RewriteCond %{HTTP_COOKIE} !be_typo_user [NC]

# NO frontend user is logged in
RewriteCond %{HTTP_COOKIE} !nc_staticfilecache [NC]

# We only redirect GET requests
RewriteCond %{REQUEST_METHOD} GET

# We only redirect URIs without query strings
RewriteCond %{QUERY_STRING} ^$

# We only redirect if a cache file actually exists
RewriteCond %{DOCUMENT_ROOT}/typo3temp/tx_ncstaticfilecache/%{HTTP_HOST}/%{REQUEST_URI}/index.html -f
RewriteRule .* typo3temp/tx_ncstaticfilecache/%{HTTP_HOST}/%{REQUEST_URI}/index.html [L]
# End static file caching

DirectoryIndex index.html
```
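One way to narrow this down, since deleting the whole .htaccess gives a 200: re-check the status code while commenting out one block at a time; whichever block flips the response from 403 back to 200 is the culprit. A minimal sketch in Python with the third-party requests library (a plain curl -I works just as well):

```python
# Rough sketch: watch the homepage status code while bisecting the .htaccess.
# Requires the third-party "requests" library.
import requests

URL = "http://www.eindexamensite.nl/"

for agent in (
    "Mozilla/5.0",                                      # a normal browser
    "Googlebot/2.1 (+http://www.google.com/bot.html)",  # what Google sends
):
    resp = requests.head(URL, headers={"User-Agent": agent},
                         allow_redirects=False, timeout=10)
    print(resp.status_code, agent)
```

Checking with two user agents also rules out any user-agent-dependent behavior as the cause.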
Any ideas? Thanks!
Maarten

Technical SEO | MaartenvandenBos
-
URLs with or without .html ending
Hello, Can anyone show me some authoritative info on whether links are better with or without a .html ending? Thanks in advance.
Technical SEO | sesertin
-
Blocking URLs with specific parameters from Googlebot
Hi, I've discovered that Googlebot is voting on products listed on our website, and as a result is creating negative ratings by placing votes from 1 to 5 on every product. The voting function is handled using JavaScript, as shown below, and the script prevents multiple votes, so most products end up with a vote of 1, which translates to "poor". How do I go about using robots.txt to block a URL with specific parameters only? I'm worried that I might end up blocking the whole product listing, which would result in de-listing from Google and the loss of many highly ranked pages.

DON'T want to block: http://www.mysite.com/product.php?productid=1234
WANT to block: http://www.mysite.com/product.php?mode=vote&productid=1234&vote=2

JavaScript button code: onclick="javascript: document.voteform.submit();"

Thanks in advance for any advice given. Regards,
Asim

Technical SEO | aethereal
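For what it's worth, Googlebot honors the * wildcard in robots.txt, so a rule can key on the mode=vote parameter without touching plain product URLs. A minimal sketch, mirroring the example URLs in the question:

```
User-agent: Googlebot
# Block only URLs carrying the vote action; plain product URLs stay crawlable.
Disallow: /*mode=vote
```

Because * matches zero or more characters, this blocks http://www.mysite.com/product.php?mode=vote&productid=1234&vote=2 while http://www.mysite.com/product.php?productid=1234 remains crawlable. The robots.txt testing tool in Webmaster Tools is a good place to confirm the pattern before going live.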
-
Duplicate canonical URLs in WordPress
Hi everyone, I'm driving myself insane trying to figure this one out and am hoping someone has more technical chops than I do. Here's the situation: I'm getting duplicate canonical tags on my pages and posts; one is inside the commented section output by the WordPress SEO plugin, and the other is elsewhere in the header. I am running the latest version of WordPress (3.1.3) and the Genesis framework. As part of my testing I added the following to my functions.php:

```
remove_action('wp_head', 'genesis_canonical');
remove_action('wp_head', 'rel_canonical');
```

What I get is this:

- With the plugin active + NO remove_action: duplicate canonical tags
- With the plugin disabled + NO remove_action: a single canonical tag
- With the plugin disabled + A remove_action: no canonical tag

I have tried using only one of these remove_actions at a time, and then combining them both. Regardless, as long as I have the plugin active I get duplicate canonical tags. Is this a bug in the plugin, perhaps somehow enabling the canonical functionality of WordPress? Thanks for your help everyone.

Robert Dempsey

Technical SEO | robertdempsey