Crawl depth and www
-
I've run a crawl with a popular amphibian-based tool and just wanted to confirm: should http://www.homepage be at crawl depth 0 or 1?
The audit shows http://homepage at level 0 and http://www.homepage at level 1 through a redirect.
Thanks
-
Schoolboy error! I put the http:// (non-www) version into the crawler, so it treats www. as a subdomain of that start URL and naturally places it at level 1. Crawling the www. version instead shows it at level 0.
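If you want to sanity-check which hostname you should seed the crawler with, following the redirect chain yourself makes it obvious. A minimal sketch using Python's requests library and a hypothetical example.com domain (swap in your own non-www start URL):

import requests

# Hypothetical start URL for illustration - substitute your own non-www hostname.
start_url = "http://example.com/"

response = requests.get(start_url, allow_redirects=True, timeout=10)

# Each hop in the chain: if the non-www host 301s to the www host,
# the www URL is the real level-0 start point for the crawl.
for hop in response.history:
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))

print("Final URL:", response.url, response.status_code)

If the non-www URL 301s straight to the www homepage, the www version is the true level-0 page and the crawl should be seeded with it.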
Related Questions
-
What is the best practice to separate different locations and languages in a URL? At the moment the URL is www.abc.com/ch/de. Is there a better way to structure the URL from an SEO perspective?
I am looking for a solution that uses a new URL structure, without www.abc.com/ch/de in the URL, to deliver the right languages in specific countries where more than one language is commonly spoken. I am looking forward to your ideas!
Technical SEO | eviom
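For what it's worth, the usual recommendation for country/language splits is less about the exact path format and more about declaring every localized version with hreflang alternates. A minimal sketch that prints the link elements a hypothetical set of Swiss locale URLs would carry (the locale codes and URLs are illustrative assumptions, not taken from the question):

# Hypothetical mapping of hreflang codes to localized URLs.
alternates = {
    "de-ch": "https://www.abc.com/ch/de/",
    "fr-ch": "https://www.abc.com/ch/fr/",
    "it-ch": "https://www.abc.com/ch/it/",
    "x-default": "https://www.abc.com/",
}

# Each localized page should list every alternate (including itself) in its <head>.
for hreflang, href in alternates.items():
    print(f'<link rel="alternate" hreflang="{hreflang}" href="{href}" />')
-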
Has Google Stopped Listing URLs with Crawl Errors in Webmaster Tools?
I went to Google Webmaster Tools this morning and found that one of my clients had 11 crawl errors. However, Webmaster Tools is not showing which URLs are experiencing the errors, which it used to do. (I checked several other clients that I manage and they also list crawl errors without showing the specific URLs.) Does anyone know how I can find out which URLs are experiencing problems? (I checked with Bing Webmaster Tools and the number of errors is different.)
Technical SEO | TopFloor
-
How long after a Google crawl do you need to keep 301 redirects?
We have just added 301s after moving our site. Google has done a crawl and spat back a few errors. How long do I need to keep those 301s in place? I may need to change some. Thanks
Technical SEO | Paul_MC
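Before changing or removing any of the 301s, it can help to confirm which old URLs still redirect and where they point. A minimal sketch with Python's requests library and hypothetical pre-move URLs:

import requests

# Hypothetical list of pre-move URLs to spot-check.
old_urls = [
    "http://example.com/old-page-1",
    "http://example.com/old-page-2",
]

for url in old_urls:
    # Don't follow the redirect - we only want the status code and target.
    r = requests.head(url, allow_redirects=False, timeout=10)
    print(url, r.status_code, r.headers.get("Location"))
-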
Can I crawl a password-protected domain with SEOmoz?
Hi everyone, just wondered if anybody has been able to use the SEOmoz site crawler for password-protected domains? In Screaming Frog you are prompted for the username and password when you set the crawler running, but SEOmoz doesn't prompt you. It seems you can only crawl sites that are live and publicly available - can anyone confirm if this is the case? Cheers, M
Technical SEO | edlondon
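If the protection is standard HTTP authentication (the kind Screaming Frog prompts for), any HTTP client can supply the credentials; whether a hosted crawler lets you do the same is a separate product question. A minimal sketch with Python's requests library, using a hypothetical staging URL and credentials:

import requests

# Hypothetical staging site and credentials, for illustration only.
url = "https://staging.example.com/"
response = requests.get(url, auth=("username", "password"), timeout=10)

# 200 means the credentials were accepted; 401 means they were rejected.
print(response.status_code)
-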
How best to deal with www.home.com and www.home.com/index.html
Firstly, this is for an .asp site, and all my usual ways of fixing this (e.g. via htaccess) don't seem to work. I'm working on a site which has www.home.com and www.home.com/index.html - both URLs resolve to the same page/content. If I simply drop a rel canonical into the page, will this solve my dupe content woes? The canonical tag would then appear on both www.home.com and www.home.com/index.html. If the above is OK, which version should I be going with - the root URL or the index.html version? Thanks in advance folks,
James @ Creatomatic
Technical SEO | Creatomatic
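One way to check that a canonical tag would do the job is to fetch both variants and confirm they emit the same rel=canonical target. A rough sketch using Python's standard library and a simple pattern match (the home.com URLs are the placeholders from the question; a production check would use a real HTML parser):

import re
import urllib.request

# The two variants from the question (placeholder domain).
urls = [
    "http://www.home.com/",
    "http://www.home.com/index.html",
]

# Naive pattern: assumes rel appears before href in the link element.
canonical = re.compile(
    r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

for url in urls:
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    match = canonical.search(html)
    # Both variants should report the same canonical href.
    print(url, "->", match.group(1) if match else "no canonical tag found")
-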
What to do about Google Crawl Error due to Faceted Navigation?
We are getting many crawl errors listed in Google Webmaster Tools. We use some faceted navigation with several variables. Google sees these as "500" response codes. It looks like Google is truncating the URL. Can we tell Google not to crawl these search results using part of the URL ("sort=" for example)? Is there a better way to solve this?
Technical SEO | EugeneF
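Crawlers that honour robots.txt wildcards (Google does; check the specific tool's documentation) would let you block the parameterised facet URLs with a rule such as Disallow: /*sort=. A small illustration of how that kind of wildcard rule matches a path - this is a simplified re-implementation for demonstration, not a real robots.txt parser:

import re

def blocked_by_wildcard_rule(path: str, rule: str) -> bool:
    # Translate robots.txt wildcards: "*" matches anything, "$" anchors the end.
    pattern = "^" + re.escape(rule).replace(r"\*", ".*").replace(r"\$", "$")
    return re.match(pattern, path) is not None

rule = "/*sort="
# Hypothetical faceted URLs for illustration.
print(blocked_by_wildcard_rule("/widgets?sort=price&colour=red", rule))  # True
print(blocked_by_wildcard_rule("/widgets/", rule))                       # False
-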
Crawl Tool Producing Random URLs
For some reason SEOmoz's crawl tool is returning duplicate content URLs that don't exist on my website. It is returning pages like "mydomain.com/pages/pages/pages/pages/pages/pricing". Nothing like that exists as a URL on my website. Has anyone experienced something similar to this, know what's causing it, or know how I can fix it?
Technical SEO | MyNet
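A common cause of this pattern is a relative link (href="pages/pricing" rather than href="/pages/pricing") appearing on pages that already live under /pages/: each pass resolves the link one directory deeper, so the crawler keeps discovering new, ever-longer URLs. A minimal sketch of that resolution using Python's standard library and the mydomain.com placeholder from the question:

from urllib.parse import urljoin

# A relative href (no leading slash) on a page under /pages/ resolves
# against that directory, not against the site root...
level_1 = urljoin("http://mydomain.com/pages/pricing", "pages/pricing")
print(level_1)  # http://mydomain.com/pages/pages/pricing

# ...and if the same relative href appears on the resulting page,
# every crawl pass nests the path one level deeper.
level_2 = urljoin(level_1, "pages/pricing")
print(level_2)  # http://mydomain.com/pages/pages/pages/pricing
-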
Trying to reduce pages crawled to within 10K limit via robots.txt
Our site has far too many pages that are not SEO-worthy for our 10K-page PRO account. In fact, only about 2,000 pages qualify for SEO value. Limitations of the store software only permit me to use robots.txt to sculpt the rogerbot site crawl. However, I am having trouble getting this to work. Our biggest problem is the 35K individual product pages and the related shopping cart links (at least another 35K); these aren't needed, as they duplicate the SEO-worthy content in the product category pages. The signature of a product page is that it is contained within a folder ending in -p. So I made the following addition to robots.txt:
User-agent: rogerbot
Disallow: /-p/
However, the latest crawl results show the 10K limit is still being exceeded. I went to Crawl Diagnostics and clicked on Export Latest Crawl to CSV. To my dismay I saw the report was overflowing with product page links, e.g. www.aspenfasteners.com/3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm. The value for the column "Search Engine blocked by robots.txt" = FALSE; does this mean blocked for all search engines? Then it's correct. If it means blocked for rogerbot, then it shouldn't even be in the report, as the report seems to only contain 10K pages. Any thoughts or hints on attaining my goal would REALLY be appreciated, I've been trying for weeks now. Honestly - virtual beers for everyone! Carlo
Technical SEO | AspenFasteners
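One thing worth noting: a rule like Disallow: /-p/ only blocks paths that literally begin with /-p/, whereas these product folders merely end in -p, so the rule never matches them. A minimal sketch with Python's standard-library robots.txt parser shows the literal prefix matching (whether rogerbot honours wildcard rules such as Disallow: /*-p/ is an assumption to confirm with Moz support):

import urllib.robotparser

# The rule from the question: it only blocks paths starting with "/-p/".
rules = [
    "User-agent: rogerbot",
    "Disallow: /-p/",
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

product_url = ("http://www.aspenfasteners.com/"
               "3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/"
               "rv006-316x039354-coan.htm")

# True means the URL is NOT blocked: the folder ends in "-p" but does not
# start with "/-p/", so the literal prefix rule never applies.
print(rp.can_fetch("rogerbot", product_url))  # -> True (still crawlable)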