Is it better to use XXX.com or XXX.com/index.html as the canonical page?
-
Is it better to use 301 redirects or a canonical tag? I suspect canonical is easier. The question is, which is the better canonical page: YYY.com or YYY.com/index.html? I assume YYY.com, since there will be many other pages such as YYY.com/info.html, YYY.com/services.html, etc.
-
Glad you got it sorted out. If you're 301-redirecting a lot of domains, I'd suggest doing it gradually, or maybe holding off on the lowest-quality domains. Google can see a massive set of redirects as a bit of a red flag (too many people have bought up cheap domains and 301-redirected them to consolidate the link equity). If the domains are really all closely related, or if you're only talking about a handful (<5), then it's probably not a big issue.
-
I think things may be sorted out, but I am not sure. I actually put in 301 redirects from a bunch of domains that I own to this new domain, whose content will eventually replace my main domain. But I need to get the new domain properly set up and optimized before I move its content over to my primary domain to replace the ancient website. At that point, I will also redirect this site to the newly updated old site.
I used to have Google AdWords campaigns tied to some of the domains that I 301-redirected to the new website I am building. Those were just a waste of money, however, so I put them on hold. I also had a lot of problems with Semalt and buttons-for-website referral spam bouncing off the pages that I redirected. I put .htaccess rules in place to stop those spam sites, and that seems to work.
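For reference, blocking that kind of referrer spam in .htaccess is commonly done with a few mod_rewrite conditions. A minimal sketch, using semalt.com and buttons-for-website.com as the spam domains (not necessarily the exact rules used above):

```apache
# Return 403 Forbidden for any request whose Referer header
# matches a known spam domain (domains here are examples).
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteCond %{HTTP_REFERER} semalt\.com [NC,OR]
  RewriteCond %{HTTP_REFERER} buttons-for-website\.com [NC]
  RewriteRule .* - [F,L]
</IfModule>
```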
-
Google seems to be indexing 30-ish pages, but when I look at the cached home page, I'm actually seeing the home page of http://rfprototype.com/. Did you recently change domains or 301-redirect the old site? The cache date is around Christmas (after the original question was posted), so I think we're missing part of the puzzle here.
-
So, I think I may have had things wrong. For one thing, it seems like Moz and Google are only indexing 2 pages, while the site index shows something like 80 pages. (I suspect each image counts as a page, and there are a lot of images, but there are only about 10 or 12 distinct pages at the moment.) Also, Google and Moz do not seem to show the correct keywords in any sense like they should, leading me to think that they were just spidering 2 pages. I don't know why. I added the following to my index.html header:
and
I assume I put them in the correct place. I also believe I don't need canonical tags anywhere else.
Will these changes to my index.html accomplish what they should?
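For reference, a canonical declaration in the head of index.html generally takes this form (example.com is only a placeholder for the real domain):

```html
<!-- Points duplicate URLs such as /index.html at the preferred root URL. -->
<head>
  <link rel="canonical" href="http://www.example.com/">
</head>
```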
-
Yeah, I'd have to concur - all the evidence and case studies I've seen suggest that rel=canonical almost always passes authority (link equity). There are exceptions, but honestly, there are exceptions with 301s, too.
I think the biggest difference, practically, is the impact on human visitors. 301-redirects take people to a new page, whereas canonical tags don't.
-
With rel=canonical, value will pass the same as with a 301 redirect - for evidence, have a look here:
http://moz.com/learn/seo/canonicalization
"Another option for dealing with duplicate content is to utilize the rel=canonical tag. The rel=canonical tag passes the same amount of link juice (ranking power) as a 301 redirect, and often takes much less development time to implement."
See Dr. Pete's response in this Moz Q&A:
http://moz.com/community/q/do-canonical-tags-pass-all-of-the-link-juice-onto-the-url-they-point-to
http://googlewebmastercentral.blogspot.co.uk/2009/02/specify-your-canonical.html
https://support.google.com/webmasters/answer/139066?rd=1
http://searchenginewatch.com/sew/how-to/2288690/how-and-when-to-use-301-redirects-vs-canonical
Matt Cutts stated there is not a whole lot of difference between the 301 and the canonical - they will both lose "just a tiny little bit, not very much at all" of the credit from the referring page.
-
Ok, this is how I look at the situation.
So you have two URLs, and the question is whether to 301 redirect or use a canonical tag? In my opinion, a 301 is the better solution, because it will pass not only people to the preferred version but the link value as well.
With canonicals, on the other hand, only search engines will know which page is the preferred one, but they will not transfer the link value that can help you with your organic rankings.
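As an illustration, a page-level 301 in Apache .htaccess can be as simple as the sketch below (paths and domain are placeholders, and this assumes mod_alias is available). Note that redirecting the index file itself to the bare domain needs the loop-safe rewrite approach mentioned further down the thread.

```apache
# Permanently redirect the non-preferred URL to the preferred one;
# visitors and link equity are both sent to the target URL.
Redirect 301 /old-page.html http://www.example.com/preferred-page.html
```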
Hope this helps!
-
You would put the canonical link in the index file, and I would point it at the xxx.com version rather than the xxx.com/index.html version, since people visiting your site's homepage are going to enter the domain rather than the specific page - so xxx.com, not xxx.com/index.html.
There are some great articles on Moz explaining all of this, which I would suggest you read:
http://moz.com/learn/seo/canonicalization
Dr. Pete also wrote this post answering common questions about rel=canonical:
http://moz.com/blog/rel-confused-answers-to-your-rel-canonical-questions
In terms of 301 redirects versus canonicalization, both pass the same amount of the authority earned by the different pages. If you are trying to keep things as clean as possible, be careful that you don't create a loop when redirecting your index file to your domain - here is an old post explaining how Moz solved this 301 redirect on an Apache server:
http://moz.com/blog/apache-redirect-an-index-file-to-your-domain-without-looping
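In the spirit of that post, a loop-safe index redirect might look like the following sketch (example.com is a placeholder, and this assumes Apache with mod_rewrite; it is not necessarily Moz's exact rule set):

```apache
# Redirect direct requests for /index.html to the root URL.
# THE_REQUEST holds the literal client request line, so the internal
# DirectoryIndex rewrite of "/" to index.html does not match it,
# which is what prevents a redirect loop.
RewriteEngine On
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.html\ HTTP/ [NC]
RewriteRule ^index\.html$ http://www.example.com/ [R=301,L]
```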
I personally find that you will be fine if all the links on your site reference your preferred (canonical) URL for the homepage - in this case xxx.com - you 301 redirect the www version to it (or vice versa, depending on your preference), and you then add a canonical tag in the index.html file pointing at xxx.com (or at www.xxx.com if you would rather standardize on the www version).
Hope this helps
-
I forgot - of course, there is no xxx.com page, per se. It is actually xxx.com/index.html, so if you needed to put the canonical reference on xxx.com, how would you do it?