What is the difference between rel=canonical and 301s?
-
Hi Guys
I have been told a few times to add the rel=canonical tag to my category pages. However, every category page is actually different from the others, apart from the listings I have for my staff on each page. Some of my staff specialise in areas that cross over into other categories. But really, if I'm redirecting, for example, Psychic Readings over to Love and Relationships just because 5 of my staff members appear in both categories, the actual content and depth of each category (the skills are provided at different levels) don't justify a canonical tag from Psychic Readings to Love and Relationships.
Tell me have I got this right or completely wrong?
Here is an eg: Psychic Readings category https://www.zenory.com/psychic-readings
And love and relationships category - https://www.zenory.com/love-relationships
Hope this makes sense - I really look forward to your feedback!
Cheers
-
I understand what you mean - to be very honest, I don't think this content snippet is generating duplicate content.
However, I don't really understand the mechanism:
At https://www.zenory.com/horoscopes/taurus/day I would expect to find the daily horoscope for Taurus. When I click on Capricorn I would expect to go to https://www.zenory.com/horoscopes/capricorn/day - however, I remain on the same page and the horoscope is shown in a lightbox. I would rather put it on a separate page (if the horoscopes of all signs are present in the HTML of one sign's page, these pages become quite similar when you look at the source code).
Sounds a bit confusing, but I hope you get what I mean.
Rgds,
Dirk
-
Hi Dirk
I wanted to ask you another question with regard to this.
I have horoscope pages that have just been published today.
We offer a daily horoscope for each star sign (12). These are unique and different each day for each star sign; however, there is a weekend love section at the bottom of each star sign's page that stays the same for the whole week.
https://www.zenory.com/horoscopes/taurus/day
https://www.zenory.com/horoscopes/aries/day
The links above show a couple of the daily horoscopes. You can see the weekend love section differs between signs; however, it will be the same for the same star sign tomorrow. You can't see this yet, as we only published and released these pages today, so you'll be able to tell the difference once tomorrow's horoscope is published. Hopefully I have explained myself well here.
So my question is: half the content on a single page will be duplicate content - everything besides the new daily horoscope entry. I'm wondering if I need to add canonical tags, or if I should create a separate page for the weekend love horoscope of each star sign.
I hope this makes sense!
Thanks again Dirk!
-
That answers my question Dirk, thank you again!!!
-
For the examples you gave I would certainly not use a 301 or a canonical tag. The content is unique, and only a relatively small part (the staff list) is common.
To explain the difference:
A canonical tag is used when you have pages that are identical (or almost identical) and accessible under different URLs. A good example is an e-commerce site with a product list like mysite.com/umbrellas - if sorting the products changes the URL to something like mysite.com/umbrellas?sort=high, it's best to put a canonical on the second URL pointing to the first, so that Google will not index all the variations. A visitor can still access both pages. Googlebot normally respects the canonical - but it is not obliged to do so.
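To make the umbrella example concrete, the sorted variation would carry a canonical tag in its head pointing back to the main URL (the domain and paths here are just illustrations):

```html
<!-- In the <head> of https://mysite.com/umbrellas?sort=high -->
<link rel="canonical" href="https://mysite.com/umbrellas" />
```

Both URLs keep working for visitors; the tag is only a hint to search engines about which version to index.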
A 301 is different - in fact you give the message to the browser: this page is no longer available on this location but has moved to a new location. It's no longer possible to visit the original page (not for humans & not for bots). Google bot has to respect this directive.
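On Apache, for example, a 301 can be set up with a single redirect rule (the paths below are just an illustration, not from the sites discussed here):

```apache
# .htaccess - permanently redirect the old URL to the new one
Redirect 301 /old-page https://www.mysite.com/new-page
```

Unlike a canonical, anyone requesting /old-page - human or bot - is sent straight to the new location.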
A last option you can use is "noindex/follow". You normally use this for pages that have very little value for search engines, but where you still want the bots to follow the links and index the pages that are listed. You can use this for pages of the type blog.com/tag/subject, which generate lists of all the articles tagged with that subject. In general, pages like this are good for cross-linking but have low value for search engines, so it's better not to have them indexed.
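For a tag page like that, the directive goes in a robots meta tag in the page's head (the URL is just an example):

```html
<!-- In the <head> of blog.com/tag/subject -->
<meta name="robots" content="noindex, follow" />
```

The page itself stays out of the index, but its internal links still get crawled and pass value.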
Hope this clarifies,
Dirk