Canonical needed after noindex?
-
Hi, do you need to point a canonical from a subpage to the main page if you have already marked the subpage as noindex? Since Google is not indexing it, do we still need the canonical, and is it passing any juice?
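For context, the setup being asked about would look something like this in the subpage's head (the URLs here are hypothetical, just to illustrate the combination):

```html
<!-- Subpage at example.com/subpage (hypothetical URL) -->
<head>
  <!-- Tells search engines not to index this page -->
  <meta name="robots" content="noindex">
  <!-- Points search engines to the main page as the preferred version -->
  <link rel="canonical" href="https://www.example.com/main-page">
</head>
```

The question is whether the second tag adds anything once the first is in place.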
-
Thanks Alan
-
I tried as well and could not find it, but here is a quote from Matt Cutts:
"Eric Enge: Can a NoIndex page accumulate PageRank?
Matt Cutts: A NoIndex page can accumulate PageRank, because the links are still followed outwards from a NoIndex page.
Eric Enge: So, it can accumulate and pass PageRank.
Matt Cutts: Right, and it will still accumulate PageRank, but it won't be showing in our Index. So, I wouldn't make a NoIndex page that itself is a dead end. You can make a NoIndex page that has links to lots of other pages.
For example you might want to have a master Sitemap page and for whatever reason NoIndex that, but then have links to all your sub Sitemaps.
Eric Enge: Another example is if you have pages on a site with content that from a user point of view you recognize that it's valuable to have the page, but you feel that is too duplicative of content on another page on the site
That page might still get links, but you don't want it in the Index and you want the crawler to follow the paths into the rest of the site.
Matt Cutts: That's right. Another good example is, maybe you have a login page, and everybody ends up linking to that login page. That provides very little content value, so you could NoIndex that page, but then the outgoing links would still have PageRank.
Now, if you want to you can also add a NoFollow metatag, and that will say don't show this page at all in Google's Index, and don't follow any outgoing links, and no PageRank flows from that page. We really think of these things as trying to provide as many opportunities as possible to sculpt where you want your PageRank to flow, or where you want Googlebot to spend more time and attention."
http://www.stonetemple.com/articles/interview-matt-cutts.shtml
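The two variants Cutts contrasts in that quote could be sketched as meta robots tags like these (a minimal illustration, not taken from the interview itself):

```html
<!-- noindex alone: page is kept out of the index, but outgoing links
     are still followed, so PageRank can still flow outward -->
<meta name="robots" content="noindex">

<!-- noindex,nofollow: page is kept out of the index AND outgoing links
     are not followed, so no PageRank flows from the page -->
<meta name="robots" content="noindex, nofollow">
```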
-
Hey Alan
I tried looking for that but came back empty-handed. Any chance you can post a link if you come across that video again? Much appreciated.
-
Where did you hear this?
As I remember, Matt Cutts stated that link juice will flow through if you use follow; if I remember correctly it was in an interview with Rand on SEOmoz.
-
The follow meta tag will not pass any link juice! It is only an instruction for bots to crawl the pages linked from the page.
Please see the answer below
-
If it's not in the index, then a canonical will have no value.
I would not noindex any page unless you have a very good reason. If I had to, I would use the meta tag noindex,follow, so that any link juice pointing to the page is returned.
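The noindex,follow combination recommended here would look like this in the page head (a minimal sketch):

```html
<!-- Keep the page out of the index, but still follow its links,
     so link juice flowing into the page is passed back out -->
<meta name="robots" content="noindex, follow">
```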