Crawl Diagnostic Errors
-
Hi there,
I'm seeing a large number of errors in the SEOmoz Pro crawl results. The 404 errors are for pages that look like this:
http://www.example.com/2010/07/blogpost/http:%2F%2Fwww.example.com%2F2010%2F07%2Fblogpost%2F
I know that %2F%2F represents the two encoded slashes, but I'm not sure why these addresses are being crawled. The site is a WordPress site. Has anyone seen anything like this?
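A minimal sketch of the mechanics, assuming Python 3 and the example URLs above: because the href's slashes are percent-encoded, there is no literal // after the scheme, so URL resolvers fall back to relative-reference handling and graft the href onto the current page's URL.

from urllib.parse import unquote, urljoin

page = "http://www.example.com/2010/07/blogpost/"
bad_href = "http:%2F%2Fwww.example.com%2F2010%2F07%2Fblogpost%2F"

# Decoded, the href was meant to be the page's own absolute URL.
print(unquote(bad_href))  # http://www.example.com/2010/07/blogpost/

# Resolved as a relative reference, it gets appended to the page URL
# instead. urllib drops the stray "http:" while some crawlers keep it,
# but either way the result is a URL that does not exist, hence the 404.
print(urljoin(page, bad_href))
# http://www.example.com/2010/07/blogpost/%2F%2Fwww.example.com%2F2010%2F07%2Fblogpost%2F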
-
Yep, I think you nailed it. I crawled another two sites I manage; one has SexyBookmarks, one doesn't. The one with it had the 404 errors. A quick search for "sexy bookmarks causes 404" turned up some results as well.
You're right about the issue with the other plugin, CommentLuv. I'll definitely take that suggestion to the developer.
And a hat trick: you're right about the "Latest from the Blog" block in the footer. I've been meaning to take that out for ages.
Very grateful for your attention and wisdom! Thank you!
-
Ross, it seems you have a plugin for comments which adds a link to the last post of the person who made the comment. This is an interesting plugin which I have not seen before. There are two problems I see with it. First, it identifies links to your own site as external, when they should be tagged as internal. Secondly, it probably shouldn't be used to link to the current page: Debbi's comment is a link asking readers to view her latest article, which is the current page.
There is also a link to the current article under Recent Posts. It would be a great improvement if the plugin could identify the current URL and not include it in the list.
There is also a footer section, "Latest from the Blog", which offers a link to the post. In my opinion, offering the same links in the Recent Posts sidebar and the "Latest from the Blog" footer is excessive, and since footer links aren't used very much I would recommend removing the footer block.
The fourth link to the article I located on the page is from a plugin referred to as "Shareaholic TopSharingBar SexyBookmarks". The link is contained within JavaScript.
All four of the above links are valid and should not be the source of the 404 error.
And finally, I believe I have just discovered the root cause of this issue: your Shareaholic plugin. Try disabling it and then crawling your site again. The 404 errors should disappear.
The URL you shared, in the exact format you shared it, is present in your site's HTML code in a line which begins with the following code:
-
Will do, and thank you for your insight!
-
I just started an SEOmoz crawl of your site. It will take some time to complete; once the report is available I'll take a look.
Since you removed a plugin, the results may not be the same. You may have already resolved the issue. Please refrain from making further changes until the crawl is complete.
-
Okay, sure. Embarrassingly enough, it's my own site at bayareaseo.net.
http://www.bayareaseo.net/2011/11/things-that-can-mess-up-your-google-places-rankings/
is what's being referred to in the SEOmoz crawler,
and in GWT the original URL refers to
http://www.bayareaseo.net/2011/11/things-that-can-mess-up-your-google-places-rankings/<a
Just removed a "related posts" style plugin; not sure if that's the culprit.
-
It doesn't make sense to me that the referrer is the page itself. If you are willing to share your site's URL and the specific URL which is having an issue I can perform a crawl and offer more details.
-
The referrer is the page itself. I examined the code and I'm not seeing any links that match, with or without the funky markup, i.e. searching for
http://www.example.com/2010/07/blogpost/http:%2F%2Fwww.example.com%2F2010%2F07%2Fblogpost%2F
as well as
http://www.example.com/2010/07/blogpost/http://www.example.com/2010/07/blogpost/
I'm thinking it's down to one of two WP plugins causing the error. I found similar results in GWT, with many 404s referring from themselves, shown as
http://www.example.com/page<a
Will disable the plugins and report back after the next crawl.
-
The crawler normally starts on your site's home page and works through all the HTML code on that page, then crawls each and every link it finds, following them throughout your site. If you are seeing these errors in your crawl report, then the links are on your site.
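To make that concrete, a rough sketch of the crawl loop, assuming Python 3's standard library only; this illustrates the crawl order, not SEOmoz's actual implementation:

from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start):
    site = urlparse(start).netloc
    seen, queue = set(), [start]
    while queue:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url).read().decode("utf-8", errors="replace")
        except OSError:
            continue  # broken link; a real crawler records this as a 404
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            # An href with encoded slashes resolves RELATIVE to `url`,
            # which is exactly how the malformed 404 URLs in this thread arise.
            absolute = urljoin(url, href)
            if urlparse(absolute).netloc == site:
                queue.append(absolute)
    return seen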
Examine your crawl report and look for the REFERRER field. This field indicates the page which contains the link. If you can't see the link on the page itself, right-click on the page and choose View Page Source, then search the HTML code (Ctrl+F) for the link.
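If you'd prefer to run the same search from a script, a minimal sketch, assuming Python 3; substitute the page from your own REFERRER field and the link you're hunting for:

import urllib.request

referrer = "http://www.example.com/2010/07/blogpost/"  # page named in the REFERRER field
bad_link = "http:%2F%2Fwww.example.com%2F2010%2F07%2Fblogpost%2F"

html = urllib.request.urlopen(referrer).read().decode("utf-8", errors="replace")
for lineno, line in enumerate(html.splitlines(), start=1):
    if bad_link in line:
        # print the line number and a snippet of the offending markup
        print(lineno, line.strip()[:120])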
Related Questions
-
Unsolved: How does Moz compile the "Important pages on your site are returning a 4xx error!" report?
I have over 200 links in this report (mostly from a staging site). I have deleted that staging site, and I cannot find the source of the other links. So my question is: where is Moz finding these links?
Moz Pro | nomad_blogger
-
Htaccess and robots.txt and 902 error
Hi, this is my first question in here and I truly hope someone will be able to help. It's quite a detailed problem and I'd love to be able to fix it through your kind help. It regards htaccess files, robots.txt files and 902 errors.
In October I created a WordPress website from what was previously a non-WordPress site (it was quite dated). I had built the new site on a subdomain I created on the existing site, so that the live site could remain live whilst I created the new one on the subdomain. The site I built on the subdomain is now live, but I am concerned about the existence of the old htaccess and robots.txt files and wonder if I should just delete the old ones, leaving just the new ones on the new site. I created new htaccess and robots.txt files on the new site and have left the old htaccess files there. Just to mention that all the old content files are still sat on the server under a folder called 'old files', so I am assuming that these aren't affecting matters. I access the htaccess and robots.txt files by clicking on 'public_html' via FTP.
I did a Moz crawl and was astonished to get a 902 network error saying that it wasn't possible to crawl the site, but then I was alerted by Moz later on to say that the report was ready. I see 641 crawl errors (449 medium priority | 192 high priority | zero low priority). Please see the attached image. Each of the errors seems to have status code 200; this applies mainly to the images on each of the pages, e.g. domain.com/imagename.
The new website is built around the 907 Theme, which has some page sections on the home page and parallax sections on the home page and throughout the site. To my knowledge the content and the images on the pages are not duplicated, because I have made each page as unique and original as possible. The report says 190 pages have been duplicated, so I have no clue how this can be or how to approach fixing it.
Since October, when the new site was launched, approx 50% of incoming traffic has dropped off at the home page and that is still the case, but the site still continues to get new traffic according to Google Analytics. However, Bing, Yahoo and Google show a low level of indexing and exposure, which may be indicative of the search engines having difficulty crawling the site. In Google Webmaster Tools, the screen reports no crawl errors.
W3TC is a WordPress caching plugin which I installed just a few days ago to improve page speed, so I am not querying anything here about W3TC unless someone spots that it might be a problem; but like I said, there have been problems with traffic dropping off when visitors arrive on the home page. The Yoast SEO plugin is also being used.
I have included the contents of the htaccess and robots.txt files below. The pages on the subdomain are pointing to the live domain, as has been explained to me by the person who did the site migration. I'd like the site to be free from pages and files that shouldn't be there, and I feel that the site needs a clean-up, as well as knowing whether the robots.txt and htaccess files included in the old site should actually be there or should be deleted. OK, here goes with the information in the files. Site 1) refers to the current website, site 2) refers to the subdomain, and site 3) refers to the folder that contains all the old files from the old non-WordPress file structure.

**************** 1) htaccess on the current site: *********************

# BEGIN W3TC Browser Cache
<IfModule mod_deflate.c>
<IfModule mod_headers.c>
Header append Vary User-Agent env=!dont-vary
</IfModule>
<IfModule mod_filter.c>
AddOutputFilterByType DEFLATE text/css text/x-component application/x-javascript application/javascript text/javascript text/x-js text/html text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon application/json
<IfModule mod_mime.c>
# DEFLATE by extension
AddOutputFilter DEFLATE js css htm html xml
</IfModule>
</IfModule>
</IfModule>
# END W3TC Browser Cache

# BEGIN W3TC CDN
<FilesMatch "\.(ttf|ttc|otf|eot|woff|font.css)$">
<IfModule mod_headers.c>
Header set Access-Control-Allow-Origin "*"
</IfModule>
</FilesMatch>
# END W3TC CDN

# BEGIN W3TC Page Cache core
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteRule .* - [E=W3TC_ENC:_gzip]
RewriteCond %{HTTP_COOKIE} w3tc_preview [NC]
RewriteRule .* - [E=W3TC_PREVIEW:_preview]
RewriteCond %{REQUEST_METHOD} !=POST
RewriteCond %{QUERY_STRING} =""
RewriteCond %{REQUEST_URI} /$
RewriteCond %{HTTP_COOKIE} !(comment_author|wp-postpass|w3tc_logged_out|wordpress_logged_in|wptouch_switch_toggle) [NC]
RewriteCond "%{DOCUMENT_ROOT}/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" -f
RewriteRule .* "/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" [L]
</IfModule>
# END W3TC Page Cache core

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

....(((I have 7 301 redirects in place for old page URLs to link to new page URLs))....

# Force non-www:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www.domain.co.uk [NC]
RewriteRule ^(.*)$ http://domain.co.uk/$1 [L,R=301]

**************** 1) robots.txt on the current site: *********************

User-agent: *
Disallow:
Sitemap: http://domain.co.uk/sitemap_index.xml

**************** 2) htaccess in the subdomain folder: *********************

# Switch rewrite engine off in case this was installed under HostPay.
RewriteEngine Off
SetEnv DEFAULT_PHP_VERSION 53
DirectoryIndex index.cgi index.php

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /WPnewsiteDee/
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /subdomain/index.php [L]
</IfModule>
# END WordPress

**************** 2) robots.txt in the subdomain folder: *********************

(this robots.txt file is empty)

**************** 3) htaccess in the Old Site folder: *********************

Deny from all

**************** 3) robots.txt in the Old Site folder: *********************

User-agent: *
Disallow: /

I have tried to be thorough, so please excuse the length of my message here. I really hope one of you great people in the Moz community can help me with a solution. I have SEO knowledge and I love SEO, but I have not come across this before and I really don't know where to start with this one. Best regards to you all, and thank you for reading this.
(Attachment: moz-site-crawl-report-image_zpsirfaelgm.jpg)
Moz Pro | SEOguy1
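One quick check that bears on the old-files worry (a hedged aside, reusing the domain.co.uk placeholder from the question): crawlers only ever read the robots.txt served at the root of the live host, so copies sitting in the 'old files' or subdomain folders can be confirmed harmless by looking at what the host actually serves.

import urllib.request

# Whatever this prints is the only robots.txt crawlers see for the host;
# copies parked in other folders never apply unless served at /robots.txt.
print(urllib.request.urlopen("http://domain.co.uk/robots.txt").read().decode())
-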
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate page content issues. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters, because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0
User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups, i.e. between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot", or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
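A quick way to sanity-check the intended matching (a hedged sketch: Python's stdlib robotparser predates wildcard support, so fnmatchcase stands in for the wildcard semantics Google documents, and it is an assumption here that dotbot and rogerbot apply the same rules):

from fnmatch import fnmatchcase

rule = "/*numberOfStars=0"  # the Disallow pattern proposed above

for path in [
    "/widgets?numberOfStars=0",  # expected: blocked
    "/widgets?numberOfStars=3",  # expected: allowed
    "/widgets",                  # expected: allowed
]:
    # robots.txt wildcard rules match URL prefixes, hence the trailing "*"
    print(path, "->", "blocked" if fnmatchcase(path, rule + "*") else "allowed")

On question 2: the original robots.txt convention treats a blank line as a record separator, so putting one blank line between the dotbot group and the rogerbot group is the safer layout; most modern parsers accept both.
-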
Duplicate Errors found in my search
I have run my first site check with SEOmoz and have 4,000+ errors. The "Duplicate Page Content" culprit appears to be an extended URL that keeps showing up as a duplicate. This is only a customer log-in page and can be redirected back to the main customer log-in page, but is there a short way of doing it (rather than 4,000 individual 301s)? The format of the URL is:
http://www.????.com.au/default/customer/account/login/referer/aSR0cDovL3d3dy1234YWNiYW
Thanks
Moz Pro | Paul_MC
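A hedged sketch of the pattern-based route, assuming an Apache .htaccess at the site root and that every duplicate shares the /referer/ path prefix shown in the question; the paths echo the question's example rather than a tested config:

# One rule 301s every .../login/referer/<token> variant back to the
# canonical customer login page, replacing 4,000 individual redirects.
RewriteEngine On
RewriteRule ^default/customer/account/login/referer/ /default/customer/account/login/ [R=301,L]
-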
Why is my domain not being crawled anymore?
I just noticed that right around 12/1/2012, SEOmoz stopped crawling all but two pages out of the 400 or so on my website at www.TrustworthyCare.com. I speculate that this is probably due to some dumb mistake I made at that time, but I can't for the life of me figure out what that mistake was. Before that, the weekly crawls included all 400 or so pages. I wonder whether it's something that changed in our .htaccess file. Here's how that file looks now; can anyone see what is wrong there, or perhaps offer other suggestions if it doesn't look like anything is wrong in it? Thanks! Tim
PS - I'm a small business owner, not an SEO or software engineer.
PPS - I found and read this page, but I've pretty much tried the things described there (I think): https://seomoz.zendesk.com/entries/409821-why-isn-t-my-site-being-crawled-you-re-not-crawling-all-my-pages

=================================

RewriteCond %{HTTP_HOST} ^aservantsheartcare.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.aservantsheartcare.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^aservantsheartcaremanagement.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.aservantsheartcaremanagement.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^aservantsheartgeriatriccare.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.aservantsheartgeriatriccare.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^aservantsheartgeriatriccaremanagement.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.aservantsheartgeriatriccaremanagement.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^aservantshearthomecare.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.aservantshearthomecare.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^aservantsheartseniorcare.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.aservantsheartseniorcare.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^aservantsheartservices.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.aservantsheartservices.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^careforparents.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.careforparents.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^eldercareradio.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.eldercareradio.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^helpforyourparents.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.helpforyourparents.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^privatedutyseniorcare.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.privatedutyseniorcare.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^sandiegocaremanagement.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.sandiegocaremanagement.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^sandiegocaremanager.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.sandiegocaremanager.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^sandiegogeriatriccaremanagement.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.sandiegogeriatriccaremanagement.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^sandiegogeriatriccaremanager.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.sandiegogeriatriccaremanager.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^servantsheartcare.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.servantsheartcare.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^servantshearthomecare.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.servantshearthomecare.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^servantsheartseniorcare.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.servantsheartseniorcare.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^tlccare.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.tlccare.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^tlcseniorcenter.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.tlcseniorcenter.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^tlcseniorhomecare.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.tlcseniorhomecare.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]
RewriteCond %{HTTP_HOST} ^tlcseniorservices.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.tlcseniorservices.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]

#php_value upload_max_filesize 8M
RewriteCond %{HTTP_HOST} ^trustworthycare.com$
RewriteRule ^(.*)$ "http://www.trustworthycare.com/$1" [R=301,L]

RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://blog.trustworthycare.com/.*$ [NC]
RewriteCond %{HTTP_REFERER} !^http://blog.trustworthycare.com$ [NC]
RewriteCond %{HTTP_REFERER} !^http://test.trustworthycare.com/.*$ [NC]
RewriteCond %{HTTP_REFERER} !^http://test.trustworthycare.com$ [NC]
RewriteCond %{HTTP_REFERER} !^http://trustworthycare.com/.*$ [NC]
RewriteCond %{HTTP_REFERER} !^http://trustworthycare.com$ [NC]
RewriteCond %{HTTP_REFERER} !^http://www.blog.trustworthycare.com/.*$ [NC]
RewriteCond %{HTTP_REFERER} !^http://www.blog.trustworthycare.com$ [NC]
RewriteCond %{HTTP_REFERER} !^http://www.test.trustworthycare.com/.*$ [NC]
RewriteCond %{HTTP_REFERER} !^http://www.test.trustworthycare.com$ [NC]
RewriteCond %{HTTP_REFERER} !^http://www.trustworthycare.com/.*$ [NC]
RewriteCond %{HTTP_REFERER} !^http://www.trustworthycare.com$ [NC]
RewriteCond %{HTTP_REFERER} !^http://www.trustworthycare.com/images/files_for_service_inquiries/.*$ [NC]
RewriteCond %{HTTP_REFERER} !^http://www.trustworthycare.com/images/files_for_service_inquiries$ [NC]
RewriteCond %{HTTP_REFERER} !^http://sandbox.trustworthycare.com/.*$ [NC]
RewriteCond %{HTTP_REFERER} !^http://sandbox.trustworthycare.com$ [NC]
RewriteRule .*\.(jpg|jpeg|gif|png|bmp)$ - [F,NC]

RewriteCond %{HTTP_HOST} ^ashsc.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.ashsc.com$
RewriteRule ^/?$ "http://trustworthycare.com/" [R=301,L]

# BEGIN W3TC Browser Cache
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
BrowserMatch \bMSI[E] !no-gzip !gzip-only-text/html
Header append Vary User-Agent env=!dont-vary
AddOutputFilterByType DEFLATE text/css application/x-javascript text/x-component text/html text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon
<FilesMatch "\.(css|js|htc)$">
FileETag None
Header set X-Powered-By "W3 Total Cache/0.9.2.5"
</FilesMatch>
<FilesMatch "\.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$">
FileETag None
Header set X-Powered-By "W3 Total Cache/0.9.2.5"
</FilesMatch>
<FilesMatch "\.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$">
FileETag None
Header set X-Powered-By "W3 Total Cache/0.9.2.5"
</FilesMatch>
# END W3TC Browser Cache

# BEGIN W3TC Page Cache core
RewriteEngine On
RewriteBase /
RewriteRule ^(.*/)?w3tc_rewrite_test$ $1?w3tc_rewrite_test=1 [L]
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteRule .* - [E=W3TC_ENC:_gzip]
RewriteCond %{REQUEST_METHOD} !=POST
RewriteCond %{QUERY_STRING} =""
RewriteCond %{HTTP_HOST} =www.trustworthycare.com
RewriteCond %{REQUEST_URI} /$ [OR]
RewriteCond %{REQUEST_URI} (sitemap(_index)?.xml(.gz)?|[a-z0-9_-]+-sitemap([0-9]+)?.xml(.gz)?) [NC]
RewriteCond %{REQUEST_URI} !(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|/feed/|wp-.*.php|index.php) [NC,OR]
RewriteCond %{REQUEST_URI} (wp-comments-popup.php|wp-links-opml.php|wp-locations.php) [NC]
RewriteCond %{HTTP_COOKIE} !(comment_author|wp-postpass|wordpress_[a-f0-9]+|wordpress_logged_in) [NC]
RewriteCond %{HTTP_USER_AGENT} !(W3\ Total\ Cache/0.9.2.5) [NC]
RewriteCond "%{DOCUMENT_ROOT}/sitectrl/wp-content/w3tc/pgcache/%{REQUEST_URI}/_index%{ENV:W3TC_UA}%{ENV:W3TC_REF}%{ENV:W3TC_SSL}.html%{ENV:W3TC_ENC}" -f
RewriteRule .* "/sitectrl/wp-content/w3tc/pgcache/%{REQUEST_URI}/_index%{ENV:W3TC_UA}%{ENV:W3TC_REF}%{ENV:W3TC_SSL}.html%{ENV:W3TC_ENC}" [L]
# END W3TC Page Cache core

# BEGIN WordPress
RewriteEngine On
RewriteBase /
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
# END WordPress

RewriteCond %{HTTP_HOST} ^privatedutycare.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.privatedutycare.com$
RewriteRule ^/?$ "http://www.ageassistance.com" [R=301,L]

=================================

Moz Pro | tcolling
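An aside on maintainability (a hedged sketch, not the poster's actual fix): the 22 near-identical blocks above could collapse into one, assuming every parked domain's root should land on the main home page; a shorter file also makes a crawl-blocking typo much easier to spot.

# One block redirects the root of any host that is not
# trustworthycare.com or one of its subdomains to the main site.
RewriteEngine On
RewriteCond %{HTTP_HOST} !(^|\.)trustworthycare\.com$ [NC]
RewriteRule ^/?$ http://trustworthycare.com/ [R=301,L]
-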
What tools can I use to crawl a site which uses a #! hashbang?
I have a site which was created in a way that uses the hashbang (#!). I am using three different SEO tools and they can't seem to crawl the website. What suggestions can you give me for dealing with the hashbang? Any ideas, please. Thanks a lot for your help. Allan
Moz Pro | AllanDuncan
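For context on why generic crawlers choke: content behind #! is only assembled client-side, so (at the time of this thread) tools that supported it used Google's AJAX crawling scheme, requesting an _escaped_fragment_ URL instead. A minimal sketch of that mapping, assuming Python 3; the helper name is illustrative, not from any particular tool:

from urllib.parse import quote

def escaped_fragment(url: str) -> str:
    # Maps http://example.com/#!/about to
    # http://example.com/?_escaped_fragment_=%2Fabout per the (since
    # deprecated) Google AJAX crawling scheme.
    base, sep, fragment = url.partition("#!")
    if not sep:
        return url  # no hashbang, nothing to rewrite
    joiner = "&" if "?" in base else "?"
    return base + joiner + "_escaped_fragment_=" + quote(fragment, safe="")

print(escaped_fragment("http://example.com/#!/products/123"))
# -> http://example.com/?_escaped_fragment_=%2Fproducts%2F123
-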
Recent SEOMoz Crawl = Strange Results
Did anyone else get some really strange results in their weekly crawls this week with the campaign tool? Either my rankings skyrocketed across three different sites or the tool is busted: something to the tune of going from 4 pages ranking in the top 30 to 15-16 pages ranking in the top 30. I'd love to find out it's just all the hard work paying off, but I am worried it is the latter. Regards - Kyle
Moz Pro | kchandler
-
Most of the time getting error.
Hi, I am getting this error most of the time in Linkscape since last month: "Sorry dude, no inlinks found matching this criteria." Please advise: is this a bug? The sites I am trying to use Linkscape for had a lot of pages crawled earlier by SEOmoz. Thanks, Preet
Moz Pro | PreetSibia