Crawl Test Report only shows home page and no inner site pages?
-
Hi,
My site is [removed]
When I first tried to set up a new campaign for the site, I received the error:
Roger has detected a problem:
We have detected that the root domain [removed] does not respond to web requests. Using this domain, we will be unable to crawl your site or present accurate SERP information.
I then ran a Crawl Test per the FAQ. The SEOmoz crawl report only shows my home page URL and does not have any inner site pages.
This is a Joomla site. What is the problem?
Thanks!
Dave
-
You're welcome!
-
OK, no problem. Thanks for your time Stephanie!
-
Weird. I would contact the help desk for support; I'm sure they can help. Sorry I couldn't be of more assistance.
-
Nope, that doesn't work. I am trying to set up the campaign for the root domain level.
-
Try it with www in front of the domain.
-
I still can't create a new campaign. I don't understand why you can submit it but I can't. Please see the attached image. Thanks!
-
Try again; I submitted it myself and it worked fine. The website may have been temporarily down when you first tried.
-
Thanks for the reply.
Yes, I have submitted sitemaps to Google Webmaster Tools as well as Bing about one week ago.
Please advise, thanks!
-
Did you create a sitemap?
I would create a sitemap and submit it to Google Webmaster Central.
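For illustration, a minimal sitemap can be generated with a few lines of script; the URLs below are placeholders, not the asker's actual pages:

```python
# Build a minimal sitemap.xml for a handful of pages.
# The URLs here are placeholders for illustration only.
from xml.etree import ElementTree as ET

def build_sitemap(urls):
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for u in urls:
        url_el = ET.SubElement(urlset, "url")
        ET.SubElement(url_el, "loc").text = u
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap([
    "https://www.example.com/",
    "https://www.example.com/about",
])
print(sitemap)
```

Submitting a file like this in Google Webmaster Tools gives the crawler an explicit list of inner pages, which is the usual first check when a crawl only finds the home page.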
Related Questions
-
How to 301 trailing URLs to new domain home page - wildcard?
How would I add a redirect rule so that all old-domain URLs redirect to the new domain? None of the old pages exist on the new website, and the domains have been through several CMS platforms, so it would be impractical to recreate them. The problem is that they have been indexed in search engines for the past 10 years, so the move is causing a lot of 404s. Example: search "NARI Tampa Bay" and you'll find two old domains, nari-tampabay.com and nari-tampabay.org. The new domain is naritb.org. Those two old domains now point to the same nameservers as the new one and are listed as parked domains. Here are the current rules in .htaccess: <code>RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?nari-tampabay\.org$ [NC,OR]
RewriteCond %{HTTP_HOST} ^(www\.)?nari-tampabay\.com$ [NC]
RewriteRule ^(.*)$ https://www.naritb.org/$1 [L,R=301]</code>
Technical SEO | CartoMark
Hey guys, for some reason my homepage has gone down in rankings though other pages on my site have not.
This is not something I have ever seen before. The site is still indexed if I search for it directly, but it's not in the top 100 for keywords, even though sub-pages are ranking for the same keywords. Changes I have made recently include: a site transfer to WordPress, a forced redirect from http to https, removal of www via redirect, and adding a new property in Google Search Console. I have checked the .htaccess file and the sitemap, and both seem fine. Ideas? Site: https://dublinSEO.co
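As a side note on the https/www consolidation described above: after those redirects, all four host variants should collapse to a single canonical form. A small sketch of that normalization (example.com is a placeholder, not the asker's site):

```python
from urllib.parse import urlsplit, urlunsplit

def canonical(url, scheme="https", strip_www=True):
    """Normalize a URL to one canonical scheme/host form."""
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if strip_www and host.startswith("www."):
        host = host[4:]
    return urlunsplit((scheme, host, parts.path or "/", parts.query, parts.fragment))

variants = [
    "http://www.example.com/",
    "http://example.com/",
    "https://www.example.com/",
    "https://example.com/",
]
# All four variants should map to exactly one canonical URL.
assert len({canonical(v) for v in variants}) == 1
```

If any of the four variants fails to 301 to the canonical one (or redirects in a chain), that is worth checking alongside the Search Console property setup.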
Technical SEO | HappyApple84
SERPs started showing the incorrect date next to my pages
Hi Moz friends, I've noticed that since Tuesday, November 9, half of my posts' dates have changed in terms of what appears next to the post in the search results. Although published this year, some are showing a random date in 2010! (The domain was born in 2013, which makes this even odder.) This is harming the CTR of my posts, and traffic is decreasing; some posts have gone from 200 hits a day to merely 30.

On our end of the website, we have not made any changes to schema markup, rich snippets, etc. We have not edited any post dates, and we have not added new content since about a week ago; these incorrect dates just started to appear on Tuesday. The only changes have been updating certain plugins as maintenance. This is occurring on four of our websites now, so it is not specific to one. All of the websites use WordPress and the Genesis theme.

It looks like only half of the posts are showing strange dates we've never seen before (far from the original published date as well as the last-updated date: again, dates like 2010, 2011, and 2012, when none of our websites even existed until 2013). We cannot think of any correlation as to why certain posts show weird dates and others the correct ones. The only related change we can think of is that back in June we switched our posts to show the Last Updated date, to give our readers insight into when we last changed the content (since it's evergreen). Google started to use that date in the SERPs, which was great; it actually increased traffic.

I'm hoping it's a glitch and that a recrawl may sort it out. Has anybody had experience with this? I've noticed Google fluctuates between showing our last-updated date and showing no date at all, seemingly at random. We're super confused here. Thank you in advance!
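One hedged suggestion while debugging cases like this: explicit Article structured data removes ambiguity about which dates a crawler should pick up. A generic sketch of building such a block (the headline and dates are made up, and explicit markup is a hint, not a guarantee that Google will use it):

```python
import json

def article_jsonld(headline, published, modified):
    """Build Article structured data with explicit ISO 8601 dates."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "datePublished": published,
        "dateModified": modified,
    }, indent=2)

snippet = article_jsonld("Example evergreen post", "2021-03-04", "2021-11-01")
print('<script type="application/ld+json">\n%s\n</script>' % snippet)
```

Comparing the dates emitted by the theme or plugins against what the site actually intends is one way to rule out the markup as the source of the wrong years.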
Technical SEO | smmour
Are aggregate sites penalised for duplicate page content?
Hi all, We're running a used-car search engine (http://autouncle.dk/en/) in Denmark and Sweden, and soon in Germany. The site works like a conventional search engine, with a search form and pages of search results (car adverts). The nature of car searching means that the same advert exists on a large number of different URLs (because of the many different search criteria and pagination). From my understanding this is problematic, because Google will penalize the site for duplicated content. Since the order of search results is mixed, I assume SEOmoz cannot always identify nearly identical pages, so the problem is perhaps bigger than what SEOmoz can tell us. In your opinion, what is the best strategy to solve this? We currently use a very simple canonical solution.

For the record, besides collecting car adverts, AutoUncle provides a lot of value to our large user base (including valuations on all cars). We're not just another leech AdWords site; in fact, we don't have a single banner. Thanks in advance!
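To make the "simple canonical solution" concrete, one common approach is to strip order- and pagination-only parameters when computing the canonical URL, so every mixed-order variant of the same result set maps to one address. A sketch in Python; the parameter names and the /en/search path are assumptions for illustration, not AutoUncle's actual URL scheme:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameters that only reorder or paginate results, without
# changing which adverts are in the result set (assumed names).
NON_CANONICAL = {"sort", "order", "page"}

def canonical_search_url(url):
    """Drop sort/pagination params and fix parameter order."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in NON_CANONICAL]
    kept.sort()  # stable parameter order across variants
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

a = canonical_search_url("http://autouncle.dk/en/search?make=bmw&sort=price&page=3")
b = canonical_search_url("http://autouncle.dk/en/search?page=7&make=bmw&sort=year")
assert a == b  # both variants share one canonical URL
```

The computed value would then go into each variant's `rel="canonical"` tag, so crawlers fold the sorted and paginated copies into one page.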
Technical SEO | JonasNielsen
Can SEOMoz crawl a single page as opposed to an entire subfolder?
I would like the following page to be crawled: http://www.ob.org/_programs/water/water_index.asp. Instead, SEOMoz changes the page to the following subfolder, which is an invalid URL: http://www.ob.org/_programs/water/
Technical SEO | OBIAnalytics
301ed Pages Still Showing as Duplicate Content in GWMT
I thank anyone reading this for their consideration and time. We are a large site with millions of URLs for our product pages. We are also a textbook company, so by nature our products have two separate ISBNs: a 10-digit and a 13-digit form. Thus, every one of our books has at least two pages (a 10-digit and a 13-digit ISBN page).

My issue is that we have established a 301 for all the 10-digit URLs so they automatically redirect to the 13-digit page. This fix has been in place for months. However, Google still reports that it detects thousands of pages with duplicate titles and meta tags, and it is referring to the very page URLs I already 301ed to the canonical version many months ago! Is there anything I can do to fix this issue? I don't understand what I am doing wrong. Example:

http://www.bookbyte.com/product.aspx?isbn=9780321676672
http://www.bookbyte.com/product.aspx?isbn=032167667X

As you can see, the 10-digit ISBN page 301s to the 13-digit canonical version, yet Google reports duplicate titles and meta tags between the two pages, and thousands of these duplicate pages are listed.

To add some context: the ISBN is just a parameter that lets us serve content when someone searches for a product with the 10- or 13-digit ISBN. The 13-digit version of the page is the only one that physically exists; the 10-digit version is only part of the virtual URL structure of the website. That is why I cannot simply change the titles and meta tags of the 10-digit pages: they exist only in the sense that their URLs redirect to the 13-digit version. Also, we submit a sitemap of all the 13-digit pages every day, so Google knows exactly what our physical URL structure is. I have submitted this question to the GWMT forums and received no replies.
Technical SEO | dfinn
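Since every 10-digit ISBN maps deterministically to its 13-digit form (prefix 978 plus a recomputed check digit), the redirect target can be computed in application code rather than maintained as URL pairs. A sketch of the conversion, using the ISBNs from the example URLs above:

```python
def isbn10_to_isbn13(isbn10):
    """Convert an ISBN-10 to its ISBN-13 form: drop the old check
    digit, prefix 978, and append a freshly computed check digit."""
    core = "978" + isbn10[:9]
    # ISBN-13 check digit: alternate weights 1 and 3, then round up to 10.
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(core))
    check = (10 - total % 10) % 10
    return core + str(check)

# The 10-digit ISBN from the example maps to the 13-digit page:
assert isbn10_to_isbn13("032167667X") == "9780321676672"
```

Note this only addresses generating the 301 target; it does not by itself stop GWMT from reporting the already-redirected URLs, which often linger in reports until the old URLs are fully recrawled.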
How do I get rid of irrelevant back links pointing to missing pages on my site
Hi all, My site was hacked about a year ago, and as a result I now have a ton of backlinks from irrelevant sites pointing to pages on my site that no longer exist. The followed-backlinks section of the Competitive Domain Analysis tool shows about three pages' worth of these horrible links. I have two questions: how bad is this for my site's SEO (which isn't good anyway; PageRank 0), and how do I get rid of them? Any help would be much appreciated. Thanks, Andy
Technical SEO | getzen56
Importance of an optimized home page (index)
I'm helping a client redesign their website, and they want a home page that's primarily graphics and/or Flash (or jQuery). If they are able to optimize all of their key sub-pages, what is the harm in terms of SEO?
Technical SEO | EricVallee34