Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Is it better to use XXX.com or XXX.com/index.html as canonical page
-
Is it better to use 301 redirects or a canonical page? I suspect canonical is easier. The question is, which is the best canonical page, YYY.com or YYY.com/index.html? I assume YYY.com, since there will be many other pages such as YYY.com/info.html, YYY.com/services.html, etc.
-
Glad you got it sorted out. If you're 301-redirecting a lot of domains, I'd suggest doing it gradually or maybe holding off on the lowest-quality domains. Google can see a massive set of redirects as a bit of a red flag (too many people have bought up cheap domains and 301-redirected to consolidate the link equity). If the domains are really all closely related or if you're only talking about a handful (<5) then it's probably not a big issue.
-
I think things may be sorted out, but I am not sure. I actually put in 301-redirects from a bunch of domains that I own to this new domain, whose content will eventually replace my main domain's. But I need to get the domain properly set up and optimized before I move it to my primary domain to replace the ancient web site. At that time, I will also redirect this site to the rebuilt site on the old domain.
I used to have Google AdWords tied to some of the domains that I 301-redirected to the new web site that I am building. Those were just a waste of money, however, so I put them on hold. I also had a lot of problems with referrer spam from Semalt and buttons-for-website bouncing off the pages that I redirected. I put in .htaccess commands to stop those spam sites, and that seems to work.
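The poster's exact .htaccess commands aren't shown, but rules of the following shape are the usual way to block those referrer-spam domains (the domain names here are the commonly reported spammers, assumed from context):

```apache
# Refuse requests whose Referer header comes from known spam domains
RewriteEngine On
RewriteCond %{HTTP_REFERER} semalt\.com [NC,OR]
RewriteCond %{HTTP_REFERER} buttons-for-website\.com [NC]
RewriteRule .* - [F]
```

The `[F]` flag returns a 403 Forbidden, so the spam bots never reach the pages and stop showing up as bounces.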
-
Google seems to be indexing 30-ish pages, but when I look at the cached home-page, I'm actually seeing the home-page of http://rfprototype.com/. Did you recently change domains or 301-redirect the old site? The cache date is around Christmas (after the original question was posted), so I think we're missing part of the puzzle here.
-
So, I think I may have had things wrong. For one thing, it seems like Moz and Google are only indexing 2 pages, while the site index shows something like 80 pages. (I suspect each image counts as a page, and there are a lot of images, but there are only about 10 or 12 distinct pages at the moment.) Also, Google and Moz do not show the correct keywords in any sense like they should, leading me to think that they were just spidering 2 pages. I don't know why. I added canonical tags to my index.html header.
I assume I put them in the correct place. I also believe I don't need canonical tags anywhere else.
Should these changes to my index.html make the proper changes?
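For reference (the actual tags were stripped when this page was archived), a canonical tag in the head of index.html typically looks like this, with xxx.com standing in for the real domain:

```html
<head>
  <!-- Tell search engines that the bare domain is the preferred URL
       for every variant (/index.html, www., etc.) of the homepage -->
  <link rel="canonical" href="http://xxx.com/" />
</head>
```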
-
Yeah, I'd have to concur - all the evidence and case studies I've seen suggest that rel=canonical almost always passes authority (link equity). There are exceptions, but honestly, there are exceptions with 301s, too.
I think the biggest difference, practically, is the impact on human visitors. 301-redirects take people to a new page, whereas canonical tags don't.
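Concretely (with xxx.com as the thread's placeholder domain), the two mechanisms look like this - the first is an HTTP response that physically moves the visitor, the second is markup inside the duplicate page that the visitor stays on:

```
# A 301 is an HTTP response; the browser is sent to the new URL:
HTTP/1.1 301 Moved Permanently
Location: http://xxx.com/

# A canonical tag is just a hint in the page's <head>; humans never see it:
<link rel="canonical" href="http://xxx.com/" />
```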
-
Rel=canonical will pass value the same as a 301 redirect - for evidence, have a look here:
http://moz.com/learn/seo/canonicalization
"Another option for dealing with duplicate content is to utilize the rel=canonical tag. The rel=canonical tag passes the same amount of link juice (ranking power) as a 301 redirect, and often takes much less development time to implement."
See Dr. Pete's response in this Moz Q&A:
http://moz.com/community/q/do-canonical-tags-pass-all-of-the-link-juice-onto-the-url-they-point-to
http://googlewebmastercentral.blogspot.co.uk/2009/02/specify-your-canonical.html
https://support.google.com/webmasters/answer/139066?rd=1
http://searchenginewatch.com/sew/how-to/2288690/how-and-when-to-use-301-redirects-vs-canonical
Matt Cutts stated there is not a whole lot of difference between the 301 and the canonical - they will both lose "just a tiny little bit, not very much at all" of credit from the referring page.
-
Ok, this is how I look at the situation.
So you have two URLs, and the question is whether to 301-redirect or use a canonical? In my opinion, a 301 is the better solution, because it will not only send people to the preferred version but pass the link value as well.
Whereas with canonicals, only search engines will know which is the preferred page; it will not transfer the link value that can help you with organic rankings.
Hope this helps!
-
You would put the canonical link in the index file, and I would point it at the xxx.com version rather than the xxx.com/index.html version: people visiting your site's homepage are going to enter the domain, not the specific page, so xxx.com rather than xxx.com/index.html.
There are some great articles on Moz explaining all this, which I would suggest you read:
http://moz.com/learn/seo/canonicalization
Dr Pete also did this post answering common questions on rel=canonical.
http://moz.com/blog/rel-confused-answers-to-your-rel-canonical-questions
In terms of 301 redirects and canonicalization, both pass the same amount of authority gained by the different pages. If you are trying to keep things as clean as possible, be careful you don't create a loop when redirecting your index file to your domain - here is an old post explaining how Moz solved this 301 redirect on an Apache server:
http://moz.com/blog/apache-redirect-an-index-file-to-your-domain-without-looping
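The loop-free approach that post describes can be sketched as an .htaccess rule; matching on `THE_REQUEST` (the literal request line the browser sent) rather than the internally rewritten URL is what avoids the redirect loop (xxx.com is the thread's placeholder domain):

```apache
RewriteEngine On
# Fire only when the browser literally asked for /index.html,
# not when mod_dir internally maps "/" to index.html
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.html [NC]
RewriteRule ^index\.html$ http://xxx.com/ [R=301,L]
```

Without the `THE_REQUEST` condition, Apache would serve / by internally fetching index.html, match the rule again, and redirect forever.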
Personally, I find that if all the links on your site reference your preferred (canonical) URL for the homepage - in this case xxx.com - and you 301-redirect the www version to it (or vice versa, depending on your preference), and then add a canonical in the index.html file pointing at xxx.com (or at www.xxx.com, if you prefer to standardize on the www version for both), you will be fine.
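The www/non-www half of that setup can be sketched in .htaccess like this (again with xxx.com as the placeholder; swap the two hosts to standardize on www instead):

```apache
RewriteEngine On
# 301 the www host to the bare domain so only one version gets indexed
RewriteCond %{HTTP_HOST} ^www\.xxx\.com$ [NC]
RewriteRule ^(.*)$ http://xxx.com/$1 [R=301,L]
```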
Hope this helps
-
I forgot - of course, there is no xxx.com page, per se. It is actually xxx.com/index.html, so if you needed to put the canonical reference on xxx.com, how would you do it?
Related Questions
-
Blog Page Titles - Page 1, Page 2 etc.
Hi All, I have a couple of crawl errors coming up in Moz that I am trying to fix. They are duplicate page title issues in my blog area. For example, we have a URL of www.ourwebsite.com/blog/page/1, and as we have quite a few blog posts, they get put onto another page, for example www.ourwebsite.com/blog/page/2. Both of these URLs have the same heading, title, meta description, etc. I was just wondering if this is an actual SEO problem, and if there is a way to fix it. I am using WordPress, for reference, but I can't see anywhere to access the settings of these pages. Thanks
-
Getting high priority issue for our xxx.com and xxx.com/home as duplicate pages and duplicate page titles can't seem to find anything that needs to be corrected, what might I be missing?
I am getting a high-priority issue for our xxx.com and xxx.com/home, reported as both duplicate pages and duplicate page titles in crawl results. I can't seem to find anything that needs to be corrected - what am I missing? Has anyone else had a similar issue, and how was it corrected?
-
Sitemap indexed pages dropping
About a month ago I noticed the pages indexed from my sitemap are dropping. There are 134 pages in my sitemap and only 11 are indexed. It used to be 117 pages, and it just died off quickly. I still seem to be getting consistent search traffic, but I'm just not sure what's causing this. There are no warnings or manual actions required in GWT that I can find.
-
Staging & Development areas should be not indexable (i.e. no followed/no index in meta robots etc)
Hi, I take it that if there's a staging or development area on a subdomain for a site, whose content is hence usually duplicate, then it should not be indexable (i.e. noindexed and nofollowed in meta robots)? That would prevent duplicate content problems, as well as stop people outside the project from seeing work in progress or finding it accidentally in search engine listings. Also, if there's no such info in meta robots, is there any other way it may have been made non-indexable, or at least had the duplicate content problem removed by canonicalising the page to the equivalent page on the live site? In the case in question, I am finding it listed in SERPs when I search for the staging/dev area URL, so I presume this needs urgent attention? Cheers, Dan
-
Can you have a /sitemap.xml and /sitemap.html on the same site?
Thanks in advance for any responses; we really appreciate the expertise of the SEOmoz community! My question: since the file extensions are different, can a site have both a /sitemap.xml and a /sitemap.html sitting at the root domain? For example, we've already put the HTML sitemap in place here: https://www.pioneermilitaryloans.com/sitemap Now we're considering adding an XML sitemap. I know standard practice is to load it at the root (www.example.com/sitemap.xml), but I'm wondering if this will cause conflicts. I've been unable to find this topic addressed anywhere, or any real-life examples of sites currently doing this. What do you think?
-
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU), but what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number ?v=1.1... 1.2... 1.3... etc., and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear - we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
-
What is the best method to block a sub-domain, e.g. staging.domain.com/ from getting indexed?
Now that Google considers subdomains part of the TLD, I'm a little leery of testing robots.txt on staging.domain.com with something like:
User-agent: *
Disallow: /
for fear it might get www.domain.com blocked as well. Has anyone had any success using robots.txt to block sub-domains? I know I could add a meta robots tag to the staging.domain.com pages, but that would require a lot more work.
-
How Can I Block Archive Pages in Blogger when I am not using classic/default template
Hi, I am trying to block all the archive pages of my blog, as Google is indexing them. This could lead to a duplicate content issue. I am not using the default Blogger theme or the classic theme, and therefore I cannot use the code given for those themes. Please suggest how I can instruct Google not to index the archive pages of my blog? Looking for a quick response.