Crawl Test Report only shows home page and no inner site pages?
-
Hi,
My site is [removed]
When I first tried to set up a new campaign for the site, I received the error:
Roger has detected a problem:
We have detected that the root domain [removed] does not respond to web requests. Using this domain, we will be unable to crawl your site or present accurate SERP information.
I then ran a Crawl Test per the FAQ. The SEOmoz crawl report only shows my home page URL and does not have any inner site pages.
This is a Joomla site. What is the problem?
Thanks!
Dave
-
You're welcome!
-
OK, no problem. Thanks for your time, Stephanie!
-
That's weird. I would contact the help desk for support; I'm sure they can help. Sorry I couldn't be of more assistance.
-
Nope, that doesn't work. I am trying to set up the campaign for the root domain level.
-
Try it with www in front of the domain.
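For reference, if the bare domain resolves to the same server but only the www version actually serves pages, a small .htaccess rule can make the root domain respond by 301-redirecting it to the www version. This is only a sketch, assuming an Apache server with mod_rewrite enabled and using example.com as a placeholder for the real domain:

# 301-redirect requests for the bare domain to the www version
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]

If the bare domain does not resolve at all, the fix is a DNS record or hosting-panel setting rather than a rewrite rule.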
-
I still can't create a new campaign. I don't understand why you can submit it but I can't. Please see the attached image. Thanks!
-
Try again; I submitted it and it worked fine. The website may have been temporarily down the first time you tried.
-
Thanks for the reply.
Yes, I submitted sitemaps to both Google Webmaster Tools and Bing about a week ago.
Please advise, thanks!
-
Did you create a sitemap?
I would create a sitemap and submit it to Google Webmaster Central.
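For illustration, a bare-bones sitemap in the sitemaps.org XML format looks like the sketch below; the URLs are placeholders, and a Joomla sitemap extension or an online generator can build the real file, which you then submit in Google Webmaster Tools:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per page you want crawled; example.com is a placeholder -->
  <url>
    <loc>http://www.example.com/</loc>
  </url>
  <url>
    <loc>http://www.example.com/some-inner-page/</loc>
  </url>
</urlset>

You can also point crawlers at it by adding a "Sitemap: http://www.example.com/sitemap.xml" line to robots.txt.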
Related Questions
-
When rogerbot tries to crawl my site it gets a 404. Why?
When rogerbot tries to crawl my site it requests http://website.com. My website then tries to redirect to http://www.website.com, throws a 404, and ends up not getting crawled. It also throws a 404 when trying to read my robots.txt file for some reason. We allow the rogerbot user agent, so I'm unsure what's happening here. Is there something weird going on when trying to access my site without the 'www' that is causing the 404? Any insight is helpful here. Thanks,
Technical SEO | | BlakeBooth0 -
Duplicate Landing Pages showing up in search results
Hey Guys, I recently noticed that our Christmas Gifts landing page was ranking twice in the Google SERPs for the query "Christmas Gifts." One of these pages is an old URL that has already been 301 redirected to the new URL, which is also showing up in the search results. In the results, the following show up in positions 2 & 3 for the Christmas Gifts query:
www.uncommongoods.com/gifts/christmas/christmas-gifts
www.uncommongoods.com/occasions/christmas-gifts/christmas-gifts
The URL with "occasions" in it has already been 301 redirected to the URL above it. Not sure why this is still showing up. I know it takes Google some time to index 301s and sometimes they show old URLs, but it's been a few months since the old "occasions" URL was redirected. The title tags for these pages are different, but they are actually the same page. The new "gifts" version of the URL was made live in the navigation of our site just last week, and before that it was hidden from our navigation. Would this be the reason it's now showing up in search? Any ideas on why this might be happening? Thanks!
Technical SEO | | znotes0 -
GWT False Reporting, or does Googlebot have weird crawling ability?
Hi, I hope someone can help me. I have launched a new website and am trying hard to make everything perfect. I have been using Google Webmaster Tools (GWT) to ensure everything is as it should be, but the crawl errors being reported do not match my site. I mark them as fixed, check again the next day, and it reports the same or similar errors again.
Example: http://www.mydomain.com/category/article/ (this would be a correct structure for the site). GWT reports: http://www.mydomain.com/category/article/category/article/ 404 (it does not exist, never has and never will). I have been to the pages listed as linking to this page and they do not contain links in this manner. I have checked the page source code, all links from the given pages have the correct structure, and it is impossible to replicate this type of crawl. This happens across most of the site; I have a few hundred pages all ending in a trailing slash, and most pages of the site are reported in this manner, making it look like I have close to 1,000 404 errors when I am not able to replicate this crawl using many different methods.
The site is using an .htaccess file with redirects and a rewrite condition. The rewrite condition forces the trailing slash on folders:
# Redirect when there is no trailing slash
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !\.(html|shtml)$
RewriteCond %{REQUEST_URI} !(.*)/$
RewriteRule ^(.*)$ /$1/ [L,R=301]
Then we are using redirects in this manner:
Redirect 301 /article.html http://www.domain.com/article/
In addition to the above, we had a development site whilst I was building the new site, http://dev.slimandsave.co.uk, which had been spidered without my knowledge until it was too late. So when I put the site live I left the development domain in place (http://dev.domain.com) and redirected it like so:
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteRule ^ - [E=protossl]
RewriteCond %{HTTPS} on
RewriteRule ^ - [E=protossl:s]
RewriteRule ^ http%{ENV:protossl}://www.domain.com%{REQUEST_URI} [L,R=301]
</IfModule>
Is there anything that I have done that would cause this type of redirect 'loop'? Any help greatly appreciated.
Technical SEO | | baldnut0 -
Pages Linking to Sites that Return 404 Error
We have just a few 404 errors on our site. Is there any way to figure out which pages are linking to the pages that create 404 errors? I would rather fix the links than create new 301 redirects. Thanks!
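One low-tech way to answer this, assuming the site runs on Apache and logs with the standard combined format, is to read the Referer field in the access log: each 404 entry records the page the visitor or crawler came from (when a Referer header is sent), i.e. the page carrying the broken link. A sketch of the relevant server configuration, with an example log path:

# The combined format records the Referer, i.e. the page that linked to the requested URL,
# so 404 entries in the log show where each broken link lives.
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog /var/log/apache2/access.log combined

Many crawl tools' CSV exports include a referring-page column as well, which gives the same answer without touching server config.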
Technical SEO | | jsillay0 -
Numerous 404 errors in crawl diagnostics (non-existent pages)
As new as they come to SEO, so please be gentle... I have a WordPress site set up for my photography business. Looking at my crawl diagnostics I see several 4xx (client error) alerts. These all point to non-existent pages on my site, e.g.: http://www.robertswanigan.com/happy-birthday-sara/109,97,105,108,116,111,58,104,116,116,112,58,47,47,109,97,105,108,116,111,58,105,110,102,111,64,114,111,98,101,114,116,115,119,97,110,105,103,97,110,46,99,111,109 Totally lost on what could be causing this. Thanks in advance for any help!
Technical SEO | | Swanny8110 -
Noindex search result pages on a classifieds site
Dear All, is it a good idea to noindex the search result pages of a classifieds site? Taking into account that category pages are also search result pages, I would say it is not a good idea, but all of the information is in the sitemap and Google can index the individual listings (which are index, follow) anyway. What would you do? And what effect would marking the search result pages as "search results" with schema.org microdata have on the indexing of the site? Many thanks for your help. Best regards, Daniel
Technical SEO | | te_c0 -
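Regarding the noindex question above: if the search result pages can be identified by their URL pattern, one option is to send an X-Robots-Tag header instead of editing templates. A minimal sketch, assuming Apache 2.4+ with mod_headers and a hypothetical q= query parameter for searches; the pattern must be adjusted to the site's real search URLs so that it does not catch the category pages you want indexed:

# Mark internal search result pages (e.g. /search?q=...) as noindex, but keep following their links
<If "%{QUERY_STRING} =~ /(^|&)q=/">
    Header set X-Robots-Tag "noindex, follow"
</If>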
Duplicate Content for our Advertising Sites Showing in Search Results
Hello, my company has a couple of different sites (Magento stores) for organic, AdWords, and adCenter purposes. They are mirror sites of each other except for the phone number, contact form, etc. Here is our organic site: http://www.oxygenconcnetratorstore.com/ And the AdWords and adCenter sites, respectively:
http://www.oxygenconcnetratorstore.com/portable/
http://www.oxygenconcnetratorstore.com/oxygen/
The problem is, both the AdWords and adCenter stores appear in the Google SERPs when you put in the exact URL. I have a "noindex/nofollow" tag on both of the advertising sites, but they are still showing in search results. I feel we are getting hurt for basically having 3 sites of duplicate content. Is there a reason why the sites would be showing in search results even with the noindex/nofollow tags? Any help would be awesome. Thanks. (Screenshot attached: seomoz.jpg)
Technical SEO | | chuck-layton
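Regarding the question above: a noindex tag only takes effect once Google recrawls the pages, so already-indexed copies can linger in the results for a while, and the pages must not be blocked in robots.txt or the tag will never be seen. As a belt-and-braces sketch, assuming the stores run on Apache with mod_headers, an .htaccess file dropped into each ad-only copy can send the same signal as an HTTP header:

# .htaccess placed in /portable/ and /oxygen/ (the ad-only mirror stores)
# Every response from these directories gets an X-Robots-Tag noindex header.
Header set X-Robots-Tag "noindex, nofollow"

A URL removal request in Google Webmaster Tools can speed up dropping the copies that are already indexed.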
On-Page Report Says 'F', and I'm Confoozled As to Why
I'm primarily interested in how we failed in our "Broad Keyword Usage in Title" category. The keyword pair we're gunnin' for is: "Mac Windows". Our current page title is: "CrossOver: Windows on Mac and Linux with the easiest and most affordable emulator - CodeWeavers". This is, I grant, ugly. However, bear with me. The SEOmoz Report Card says "Easy Fix!" and suggests: "Employ the keyword in the page title, preferably as the first words in the element." I humbly submit that "Mac" and "Windows" ARE in the page title. So what am I missing? Is it the placement of the words relative to each other, or relative to the start of the sentence? Or is the phrase "CrossOver:" somehow blocking the rest of the sentence from being read? Are colons evil? I'm genuinely mystified as to why (from a structural standpoint) our existing title tag is failing this test, and I'd be delighted for answers and/or feedback. Thanks in advance.
Technical SEO | | CodeWeavers0