Canonical Issue! Please Help
-
Hi, I'm having a problem with one of my websites, say www.abc.com. Certain information on the site is long and therefore has to be split across several pages. For example, let's say there is a section for the "List of Business Schools in Canada"; this is a huge list and is divided into several pages. The main URL is www.abc.com/business-schools/list-of-business-schools-in-canada.html, and the subsequent pages continue like this:
www.abc.com/business-schools/list-of-business-schools-in-canada1.html
www.abc.com/business-schools/list-of-business-schools-in-canada2.html
www.abc.com/business-schools/list-of-business-schools-in-canada3.html etc.
Now Google is reporting a canonical issue with these pages. What am I supposed to do about it? I've checked that the rel="canonical" tag is present on every page (canada1.html, canada2.html, etc.) and that the canonical URL is set to the main list-of-business-schools-in-canada.html page. So why is Google still flagging this as a canonical issue? Have I made a mistake in placing the rel="canonical" tag? Please suggest. Thanks in advance,
-
Thanks, Dr., it really helped.
-
Agreed - rel=prev/next is probably more appropriate here. Google prefers that you not rel-canonical a paginated series back to page 1. You are allowed to canonical to a "View All" version, but then they prefer you have a "View All" link for users as well. It depends a bit on how many pages we're talking about and if there are any options that complicate the URLs, like sorts and filters.
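A sketch of what that pagination markup could look like, using the example URLs from the original question. Note that middle pages carry both tags, while the first page would have only rel="next" and the last page only rel="prev":

```html
<!-- In the <head> of list-of-business-schools-in-canada2.html -->
<!-- Tells search engines this page is part of a paginated series,
     rather than canonicalizing every page back to page 1. -->
<link rel="prev" href="http://www.abc.com/business-schools/list-of-business-schools-in-canada1.html" />
<link rel="next" href="http://www.abc.com/business-schools/list-of-business-schools-in-canada3.html" />
```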
-
I'm not sure I fully understand your query, but there is a specific markup you can use for pagination: http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
-
What is the message you are getting from Google exactly?
A canonical issue can also refer to the www vs. non-www version of a URL.
Here is Matt Cutts of Google's explanation:
Q: What is a canonical url? Do you have to use such a weird word, anyway?
A: Sorry that it’s a strange word; that’s what we call it around Google. Canonicalization is the process of picking the best url when there are several choices, and it usually refers to home pages. For example, most people would consider these the same urls:
www.example.com
example.com/
www.example.com/index.html
example.com/home.asp
But technically all of these urls are different. A web server could return completely different content for all the urls above. When Google “canonicalizes” a url, we try to pick the url that seems like the best representative from that set.
You can set the www or non-www in htaccess and in Google Webmaster.
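For example, a minimal .htaccess sketch that forces the www version, assuming an Apache server with mod_rewrite enabled and using the hypothetical abc.com domain from the question (swap in your own domain):

```apache
# 301-redirect non-www requests to the www hostname,
# so only one version of each URL gets indexed.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^abc\.com$ [NC]
RewriteRule ^(.*)$ http://www.abc.com/$1 [R=301,L]
```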
Related Questions
-
What is a robots.txt file issue?
I hope you are well. Moz keeps sending me a notification that my website can't be crawled and tells me to check the robots.txt file. Now the question is: how can I solve this problem, and what should I write in the robots.txt file? Here is my website: https://www.myqurantutor.com/. I need your help, brothers. Thanks in advance.
On-Page Optimization | matee.usman
-
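A minimal robots.txt sketch that allows crawling of the entire site; the Sitemap line is an assumption, so adjust the path to wherever the site's sitemap actually lives:

```text
# Allow all crawlers to fetch everything.
User-agent: *
Disallow:

# Hypothetical sitemap location; change to the real path.
Sitemap: https://www.myqurantutor.com/sitemap.xml
```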
Fixing Index Errors in the new Google Search Console - Help
Hi, so I have started using the new Search Console, and for one of my clients there are a few 'Index Coverage Errors'. In the old version you could simply analyse, test, and then mark any URLs as fixed. Does anyone know if that is possible in the new version? There are options to validate errors but no 'mark as fixed' option. Do you need to validate the errors before you can fix them?
On-Page Optimization | daniel-brooks
-
Help with Temporary Redirects on Pages
Hi guys, my latest crawl shows this: "44% of site pages served 302 redirects during the last crawl". When I click through to investigate the issue I see, under URL:
https://www.....
https://www.....
https://www.....
https://www.....
https://www.....
And under Redirect URL:
http://www....
http://www....
http://www....
I recently read an article from Yoast explaining that this is an https:// to http:// redirect (of course 😉), but why is this an issue that needs to be solved, and how do I solve it? Thanks again for your help and thoughts. Joost
On-Page Optimization | jeeyer
-
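For what it's worth, a common way to clean up redirects like those, assuming an Apache server where the site should live on https, is a single permanent rule sending http traffic to https rather than the other way round. This is a sketch under that assumption, not a drop-in fix:

```apache
# Replace temporary (302) https->http redirects with one
# permanent (301) rule forcing HTTPS for every request.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```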
Navigation Links Causing Too Many Links Help?
Hello, I have read some SEOmoz search results on this but am still concerned that Google may see 4,500 'Too Many Links' warnings as a problem. This is caused primarily by our header navigation, which is not intended as keyword stuffing but to provide all avenues to our breadth of content. Site: crazymikesapps.com. Most answers seem to advise that if there is no keyword stuffing at hand, don't worry about it. Any help appreciated. Thank you, Mike
On-Page Optimization | crazymikesapps
-
Help, a certain directory is not being indexed
Before I start, don't expect this to be too easy. This really has me puzzled, and I'm surprised I have yet to find a solution for it.

We have a WordPress website, launched over 6 months ago, and have never had an issue getting content such as pages, posts, and categories indexed. However, somewhat recently (about 2 months ago) I installed a directory plugin (Business Directory Plugin) which lists businesses via unique URLs accessible from a subfolder. It's these business listings that I absolutely cannot get indexed. The index page of the directory, which links to the business pages, is indexed, but for some reason Google is not indexing the listing pages linked from it.

I don't think the content is uncrawlable: when I run crawlers on the site, such as XML sitemap crawlers, they find all the pages, including the directory pages, so I'm sure it's not a matter of search engines not finding the content. I have created XML sitemaps and uploaded them to Webmaster Tools. Tools recognises that there are many pages in the XML sitemap, but Google continues to index only a small percentage (everything but my business listings). The directory has been there for about 8 weeks now, so I know there is an issue; it should have been indexed by now. See our main website at www.smashrepairbid.com.au and the business directory index page at www.smashrepairbid.com.au/our-shops/

To throw in a curve ball: while looking into this issue and setting up Tools, we noticed a lot of 404 error pages (nearly 4,000). We were very confused about where these were coming from, as they were only being generated by search engines; humans could not access the 404s, so we are guessing search engines were firing some JavaScript code to generate them, or something else weird. We could see the 404s in the logs, so we know they were legit, but again we believe it was only search engines. This was validated when we added some rules to robots.txt and saw the errors in the logs stop. We put the rules in the robots.txt file to try to stop Google from indexing the 404 pages, as we could not find any way to fix the site or code (no idea what is causing them). If you do a site: search in Google you will see all the pages that are omitted from the results.

Since adding the rules to robots.txt, our impressions shown in Tools have jumped right up (increased by 5 times), which I took as a good sign, but we're still not getting the results we want. Does anyone have any clue what's going on, or why Google and other search engines are not indexing this content? Any help would be greatly appreciated, and if you need any other information to assist, just ask. I really appreciate anyone who can spare their time to help me; I sure do need it. Thanks.
On-Page Optimization | ziller
-
Duplicate content because of content scraping - please help
We manage brand websites in a very competitive industry that have thousands of affiliate links. We see that more and more websites (mainly affiliate websites) are scraping our brand websites' content, which generates a lot of duplicate content (though most of them link back to us with an affiliate link). Our brand websites still rank for any sentence in brackets you search in Google. Will this duplicate content hurt our brand websites? If yes, should we take some preventive actions? We are not able to add ongoing UGC or additional text to all our duplicated content, and trying to stop those websites from stealing our content is like playing cat and mouse... Thanks for your advice.
On-Page Optimization | Tit
-
Help, I don't understand Rel Canonical
I'm really stuck on how to fix Rel Canonical errors on a WordPress site. I went in and changed all the URLs to remove the www and added a trailing / to the end. I get this message in the page analysis details, under Canonical URL:
"http://www.some-url.com.au/",
"http://some-url.com.au/", and
"http://some-url.com.au/"
Well, the first one with the www doesn't exist, and the second two URLs are the same! (Note that I have removed the actual URL for this post.) I'm not sure how to read and fix the errors from the reports either. The only issue I can see is that the 'Tag Value' has the www and the 'Page Title - URL' doesn't have the www.
On-Page Optimization | zapprabbit
-
Duplicate content issues with products page 1,2,3 and so on
Hi, we have this products page, for example a landing page:
http://www.redwrappings.com.au/australian-made/gift-ideas
and then we have the links to pages 2, 3, 4 and so on:
http://www.redwrappings.com.au/products.php?c=australian-made&p=2
http://www.redwrappings.com.au/products.php?c=australian-made&p=3
In SEOmoz, these are recognized as duplicate page content. What would be the best way to solve this problem? One easy way I can think of is to nominate the first landing page as the 'master' page (http://www.redwrappings.com.au/australian-made/gift-ideas) and add canonical meta links on pages 2, 3 and so on. Any other suggestions? Thanks 🙂
On-Page Optimization | Essentia
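A sketch of the canonical approach suggested above, placed in the head of pages 2, 3, and so on, using the URLs from the question. Note that, as discussed earlier in this thread, Google generally prefers rel="prev"/"next" over canonicalizing a paginated series back to page 1, so treat this as an illustration of the idea rather than a recommendation:

```html
<!-- In the <head> of products.php?c=australian-made&p=2 -->
<!-- Points the paginated page at the nominated 'master' landing page. -->
<link rel="canonical" href="http://www.redwrappings.com.au/australian-made/gift-ideas" />
```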