Defining Canonical First and Noindexing Later
-
We found some repetitive pages on our site, mostly driven by sort or filter parameters. We have tried a lot to remove them, but without much improvement.
Is this the correct approach:
a) We create brand-new pages for that section and put a rel=canonical tag on the old pages pointing to the new ones.
b) Then, once the canonical is declared, we noindex the old pages.
Is this the correct way to let the new pages supersede the old ones?
-
Happy Monday to you!
I agree with Mike - you need to use a 301 redirect to point from the old pages to the new pages.
If you are reworking the site and have to use parameters, consider dropping the parameters into a hash fragment - this hides them from the bots, and you get the full SEO benefit of your links.
Credit Rand for this excellent walk through - http://moz.com/blog/whiteboard-friday-using-the-hash
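To see why the hash trick works: everything after the `#` is a fragment, which browsers keep client-side and crawlers treat as part of the same URL, so every sort order resolves to one page. A minimal Python sketch (the URLs are made up for illustration):

```python
from urllib.parse import urlparse

# A sort option encoded as a query parameter creates a distinct URL
# that Google crawls and may index as a duplicate.
param_url = "https://example.com/shoes?sort=price_asc"

# The same option encoded in a hash fragment: the fragment is never
# sent to the server, so crawlers effectively see just the base URL.
hash_url = "https://example.com/shoes#sort=price_asc"

def crawlable_part(url):
    """Return the URL as a crawler effectively sees it (fragment dropped)."""
    parts = urlparse(url)
    base = f"{parts.scheme}://{parts.netloc}{parts.path}"
    return base + (f"?{parts.query}" if parts.query else "")

print(crawlable_part(param_url))  # https://example.com/shoes?sort=price_asc
print(crawlable_part(hash_url))   # https://example.com/shoes
```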
There are other ways to deal with parameters and re-sorts of a result page, but it depends on your situation. Bottom line: if you are going through the effort of a site restructure, don't set yourself up to end up with the same problem you have now.
Figure out what your "Golden URLs" are - the category and product pages targeting your key words - and then find a way to "hide" all the other versions of those same pages (in this example, all the re-sorted and search-result pages) from Google. This is why I would often use a "noindex, nofollow" meta tag on a page vs a canonical. Don't waste Googlebot's time crawling a bunch of pages that you don't want to rank anyway. Set up the structure so the crawl is clear and focused on the pages that are the most important.
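For reference, the meta tag in question looks like the snippet below, and you can audit a template for it with nothing more than the standard library. This is a sketch - the HTML here is a stand-in for your own pages:

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collect the content of any <meta name="robots"> tags in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives.append(a.get("content", "").lower())

html = """<html><head>
<meta name="robots" content="noindex, nofollow">
<title>Re-sorted result page</title>
</head><body>...</body></html>"""

finder = RobotsMetaFinder()
finder.feed(html)
print(finder.directives)  # ['noindex, nofollow']
```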
Cheers!
-
-
1. If you have older content and you create newer relevant content that you want people to see instead, you likely want a 301 redirect. In this way, all (or nearly all) of the link equity is passed to the newer content, which will eventually rank in place of the older content.
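The 301s are usually easiest to manage as a simple old-to-new mapping that your server config is generated from. A sketch assuming Apache-style rules - the URL pairs are placeholders for your own pages:

```python
# Map each retired URL path to its replacement. These paths are
# placeholders -- substitute your own old/new page pairs.
redirects = {
    "/old-category/widgets": "/widgets",
    "/old-category/gadgets": "/gadgets",
}

def htaccess_rules(mapping):
    """Emit one permanent (301) Redirect rule per old/new pair."""
    return "\n".join(
        f"Redirect 301 {old} {new}" for old, new in sorted(mapping.items())
    )

print(htaccess_rules(redirects))
```

Keeping the mapping in one place also makes it easy to prune entries later, once an old URL no longer has links or traffic.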
-
2. If you have duplicate pages like those caused by a parameter, where site.com/page1 is the same as site.com/page1?this=x, then you should canonicalize the page and its parameters to site.com/page1. In this way, the search engines understand that page1 is the real version of the content and, thanks to the canonical, it will eventually take the place of the parametered versions that had been appearing in the SERPs.
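One common implementation is to compute the canonical URL server-side by stripping the sort/filter parameters before writing the rel=canonical tag into the page. A sketch - the parameter names here are examples; use your site's actual ones:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Parameters that only re-sort or filter the same content set.
# These names are illustrative -- list your site's real ones.
NON_CANONICAL_PARAMS = {"sort", "order", "filter", "page_size"}

def canonical_url(url):
    """Strip non-canonical parameters so site.com/page1?sort=x
    canonicalizes to site.com/page1."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k not in NON_CANONICAL_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(canonical_url("https://example.com/page1?sort=price&color=red"))
# https://example.com/page1?color=red
print(canonical_url("https://example.com/page1?sort=price"))
# https://example.com/page1
```

Note that parameters which genuinely change the content set (like `color` above) are kept, so those pages can still rank on their own.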
Appendix to 1... Down the road, those older pages that were redirected may wind up with no links pointing to them from anywhere and no traffic going to them. At that point you may consider just 404ing the older pages if you'd like to clean up old, less useful redirects.
Appendix to 2... A canonical is a suggestion, not a directive. This means the search engines do not have to follow it if they feel it is not entirely relevant.
-