Crawl went from a few errors to thousands when I added my blog
-
I am new here. I recently got the errors from the SEOmoz crawl of my site down from a couple hundred to just a handful. So I took the leap and moved my blog to www.mysitename.com/blog (which I see recommended here), and now my errors are in the thousands.
My blog, which was at a separate URL, has pages going back to 2007.
I am not sure if it is appropriate to post my site URL in a question here.
One error that really stands out is this:
Description: Using rel=canonical suggests to search engines which URL should be seen as canonical.
On my root page I have:
rel="canonical" href="http://www.mysitename.com"/>
Thanks for any help...
-
That's not the one you get with a Pro account, but it probably does much the same.
I used the Bing SEO Toolkit to crawl your site, but SEOmoz should find much the same.
-
Is this the Crawl Test? (Resources > Tools > Crawl Test)
-
To use the SEOmoz crawler I think you need to be a member.
You can use Bing or Google Webmaster Tools to find them.
I can assure you that you have more than two.
-
Can you tell me where to look for the SEOmoz list of broken links?
-
I found over 300 broken links, and that was only a sample; I would suggest there are many more.
How many did SEOmoz report?
-
[link removed]
-
What is the URL? I will tell you if they are real or not.
-
Yes, it is a "notice," so it is just information to me, not an error.
Thank you!
I also have several 404 red status codes; any idea what these are?
I read the recent posts about them and that they may disappear on a future crawl...
-
This canonical error is a notice, not an error, correct?
If so, it is just letting you know they are there; it is important that you don't have canonicals in pages without knowing it.
A canonical on every page pointing to itself was seen as a good idea to help protect against screen scraping, but recently Bing has said that this is an incorrect use and that they will lose trust in your canonicals if used in this way.
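To illustrate the pattern being described, here is a minimal sketch (example.com is a made-up URL, not from this thread). A self-referencing canonical is simply a page whose canonical tag points at its own address, placed in the head of http://www.example.com/some-page/ itself:

<link rel="canonical" href="http://www.example.com/some-page/" />

Used on every page like this, it says nothing about which of several duplicate URLs is preferred, which is the misuse Bing is describing.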
Related Questions
-
Moz crawling doesn't show all of my backlinks
Hello, I'm trying to make an SEO backlinks report for my website. When using Link Explorer, I see only a few backlinks, while I have many more backlinks to this website. Does anyone have an idea about how to fix this issue? How can I check and correct this? My website is www.signsny.com.
-
Htaccess and robots.txt and 902 error
Hi, this is my first question in here, and I truly hope someone will be able to help. It's quite a detailed problem and I'd love to be able to fix it through your kind help. It regards htaccess files, robots.txt files, and 902 errors.

In October I created a WordPress website from what was previously a non-WordPress site; it was quite dated. I had built the new site on a subdomain I created on the existing site so that the live site could remain live whilst I built on the subdomain. The site I built on the subdomain is now live, but I am concerned about the existence of the old htaccess files and robots.txt files, and wonder if I should just delete the old ones to leave just the new ones on the new site. I created new htaccess and robots.txt files on the new site and have left the old htaccess files there. Just to mention that all the old content files are still sat on the server under a folder called 'old files', so I am assuming that these aren't affecting matters. I access the htaccess and robots.txt files by clicking on 'public html' via FTP.

I did a Moz crawl and was astonished to get a 902 network error saying that it wasn't possible to crawl the site, but then I was alerted by Moz later on that the report was ready. I see 641 crawl errors (449 medium priority | 192 high priority | zero low priority). Please see attached image. Each of the errors seems to have status code 200; this seems to apply mainly to the images on each of the pages, e.g. domain.com/imagename. The new website is built around the 907 Theme, which has some page sections on the home page, and parallax sections on the home page and throughout the site. To my knowledge the content and the images on the pages are not duplicated, because I have made each page as unique and original as possible. The report says 190 pages have been duplicated, so I have no clue how this can be or how to approach fixing it.

Since October when the new site was launched, approx 50% of incoming traffic has dropped off at the home page, and that is still the case, but the site still continues to get new traffic according to Google Analytics statistics. However, Bing, Yahoo, and Google show a low level of indexing and exposure, which may be indicative of the search engines having difficulty crawling the site. In Google Webmaster Tools, the screen text reports no crawl errors. W3TC is a WordPress caching plugin which I installed just a few days ago to improve page speed, so I am not querying anything here about W3TC unless someone spots that this might be a problem, but like I said there have been problems re traffic dropping off when visitors arrive on the home page. The Yoast SEO plugin is being used.

I have included information about the htaccess and robots.txt files below. The pages on the subdomain are pointing to the live domain, as has been explained to me by the person who did the site migration. I'd like the site to be free from pages and files that shouldn't be there, and I feel that the site needs a clean-up, as well as knowing whether the robots.txt and htaccess files that are included in the old site should actually be there or if they should be deleted. OK, here goes with the information in the files. Site 1) refers to the current website. Site 2) refers to the subdomain. Site 3) refers to the folder that contains all the old files from the old non-WordPress file structure.
**************** 1) htaccess on the current site: *********************

# BEGIN W3TC Browser Cache
<IfModule mod_deflate.c>
<IfModule mod_headers.c>
Header append Vary User-Agent env=!dont-vary
</IfModule>
<IfModule mod_filter.c>
AddOutputFilterByType DEFLATE text/css text/x-component application/x-javascript application/javascript text/javascript text/x-js text/html text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon application/json
<IfModule mod_mime.c>
# DEFLATE by extension
AddOutputFilter DEFLATE js css htm html xml
</IfModule>
</IfModule>
</IfModule>
# END W3TC Browser Cache

# BEGIN W3TC CDN
<FilesMatch "\.(ttf|ttc|otf|eot|woff|font.css)$">
<IfModule mod_headers.c>
Header set Access-Control-Allow-Origin "*"
</IfModule>
</FilesMatch>
# END W3TC CDN

# BEGIN W3TC Page Cache core
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteRule .* - [E=W3TC_ENC:_gzip]
RewriteCond %{HTTP_COOKIE} w3tc_preview [NC]
RewriteRule .* - [E=W3TC_PREVIEW:_preview]
RewriteCond %{REQUEST_METHOD} !=POST
RewriteCond %{QUERY_STRING} =""
RewriteCond %{REQUEST_URI} /$
RewriteCond %{HTTP_COOKIE} !(comment_author|wp-postpass|w3tc_logged_out|wordpress_logged_in|wptouch_switch_toggle) [NC]
RewriteCond "%{DOCUMENT_ROOT}/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" -f
RewriteRule .* "/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" [L]
</IfModule>
# END W3TC Page Cache core

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

(... I have 7 301 redirects in place for old page URLs pointing to new page URLs ...)

# Force non-www:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www.domain.co.uk [NC]
RewriteRule ^(.*)$ http://domain.co.uk/$1 [L,R=301]

**************** 1) robots.txt on the current site: *********************

User-agent: *
Disallow:

Sitemap: http://domain.co.uk/sitemap_index.xml

**************** 2) htaccess in the subdomain folder: *********************

# Switch rewrite engine off in case this was installed under HostPay.
RewriteEngine Off
SetEnv DEFAULT_PHP_VERSION 53
DirectoryIndex index.cgi index.php

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /WPnewsiteDee/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /subdomain/index.php [L]
</IfModule>
# END WordPress

**************** 2) robots.txt in the subdomain folder: *********************

(This robots.txt file is empty.)

**************** 3) htaccess in the Old Site folder: *********************

Deny from all

**************** 3) robots.txt in the Old Site folder: *********************

User-agent: *
Disallow: /

I have tried to be thorough, so please excuse the length of my message here. I really hope one of you great people in the Moz community can help me with a solution. I have SEO knowledge and I love SEO, but I have not come across this before and I really don't know where to start with this one. Best regards to you all, and thank you for reading this. (Attached: moz-site-crawl-report-image_zpsirfaelgm.jpg)
-
SEO on-demand crawl
What happened to the on-demand crawl you could do in Pro when they switched to the new Moz site?
-
Crawl diagnostics incorrectly reporting duplicate page titles
Hi guys, I have a question in regards to the duplicate page titles being reported in my crawl diagnostics. It appears that the URL parameter "?ctm" is causing the crawler to think that duplicate pages exist. In GWT, we've specified to use the representative URL when that parameter is used. It appears to be working, since when I search site:http://www.causes.com/about?ctm=home, I am served a single search result for www.causes.com/about. That begs the question: why is the SEOmoz crawler reporting duplicate page titles when Google isn't (they don't appear under the HTML improvements for duplicate page titles)? A canonical URL is not used for this page, so I'm assuming that may be one reason why. The only other thing I can think of is that Google's crawler is simply "smarter" than the Moz crawler (no offense, you guys put out an awesome product!). Any help is greatly appreciated, and I'm looking forward to being an active participant in the Q&A community! Cheers, Brad
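As a hedged sketch (the tag below is not on the page, per the question above): the usual fix for parameter duplicates is a canonical on each parameterized variant pointing at the clean URL, e.g. in the head of http://www.causes.com/about?ctm=home:

<link rel="canonical" href="http://www.causes.com/about" />

That would give the Moz crawler the same de-duplication hint that the GWT parameter setting gives Google.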
-
How to run down the actual source of a 404 error that is reported.
In my 404 errors, the second entry is as follows: URL: http://www.virginiahomesandforeclosures.com/listing/0428387-lot-k-commerce-park-franklin-va-23851/REWIDX_URL_CDNimg/no-image.gif. Is there a simple way to find the root or page on which this error was generated? If I visit the page http://www.virginiahomesandforeclosures.com/listing/0428387-lot-k-commerce-park-franklin-va-23851 without the attached gobbledygook, I see a good page. So, bottom line, it's possible it could be in one of my sitemaps, but I have 50 of those, so it's time-consuming to search through all 50 for each error like this since I have so many. I am pretty sure it's not in my sitemaps, since Google has not picked up any of these errors and they have crawled over 12,000 URLs so far. When Google gives me a 404 error, I can click on the link, find the pages on which they found the link, and go there and correct it at the root. Any suggestions would be greatly appreciated. I have more than 1,000 of these errors with the bad URL with the junk attached to the end and have not been able to isolate the cause yet. Thanks in advance.
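One possible cause, offered purely as an assumption: "REWIDX_URL_CDN" looks like a template placeholder that failed to expand, so the image path is emitted as a relative URL and the browser resolves it against the listing page's address. Roughly, hypothetical markup in the listing template would look like:

<img src="REWIDX_URL_CDNimg/no-image.gif">

On the listing page quoted above, that relative src resolves to the exact 404 URL reported, so searching the listing template (rather than the sitemaps) for that placeholder may run the error down at its root.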
-
Campaign crawl re-schedule
Hello, on the last crawl of a website of mine, SEOmoz pointed out about 1,500 errors (ouch!). I have made some corrections and I just want to see if they are on the right track, but the next crawl is in a week. Is there any way I can force a crawl before the scheduled date? Thanks!
-
Has anyone else not had an SEOmoz crawl since Dec 22?
Before the holidays, I completed a website redesign on an eCommerce website. One of the reasons for this was duplicate content (product descriptions on two pages); the new design has removed all duplicate content. I took a look at my Crawl Diagnostics Summary this morning and this is what I saw: Last Crawl Completed: Dec. 15th, 2011. Next Crawl Starts: Dec. 22nd, 2011. I'm thinking it might have something to do with the holidays, although I would like to see this data as soon as possible. Is there a way I can request a crawl from SEOmoz? Thanks, John Parker