I don't think the risk of harm is high if it's done right, but: (1) it's easy to do wrong, and (2) I suspect the benefits are small at best. I think your time/money is better spent elsewhere.
Posts made by Dr-Pete
-
RE: Infinite Scrolling: how to index all pictures
Done correctly, assigning a URL to each virtual "page" allows Google to crawl the images. What Google is suggesting is that you then set up rel=prev/next between those pages. This tells them to treat all of the image URLs as a paginated series (like a multi-page article or search results).
My enterprise SEO friends have mixed feelings about rel=prev/next. The evidence of its effectiveness is limited, but what it's supposed to do is allow the individual pages (images, in this case) to rank while not looking like duplicate or near-duplicate content. The other option would be to rel=canonical these virtual pages, but then you'd essentially take the additional images out of ranking contention.
This infinite scroll + pagination approach is VERY technical and the implementation is well beyond Q&A's scope (it would take fairly in-depth knowledge of your site). Honestly, my gut reaction is that the time spent wouldn't be worth the gain. Most users won't know to scroll, and having 10-20 pictures vs. just a few may not add that much value. The SEO impact would be relatively small, I suspect. I think there may be easier solutions that would achieve 90% of your goals with a lot less complexity.
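Just to illustrate the markup (the URLs here are hypothetical), the <head> of a middle virtual "page" in the series would carry something roughly like:
```html
<!-- Hypothetical sketch: head of virtual page 2 in the image series -->
<link rel="prev" href="http://example.com/gallery?page=1">
<link rel="next" href="http://example.com/gallery?page=3">
```
The first page in the series would only have rel=next, and the last would only have rel=prev.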
-
RE: Infinite Scrolling: how to index all pictures
There should be no real difference, in terms of Google's infinite scroll solution. If you can chunk the content into pages with corresponding URLs, you can put any source code on those pages - text and/or images, along with corresponding alt text, etc. Once you've got one solution implemented, it should work for any kind of HTML. Not sure why images would be different in this case.
There are also ways to create photo galleries that can be crawled, mostly using AJAX. It's complex, but here's one example/discussion:
-
RE: NEw domain extensions, are they worth it seo wise?
Unfortunately, no - still pretty limited on travel - if I take a trip, it's usually to the Moz office. Speaking at MozCon in July and then in the Czech Republic in November.
-
RE: NEw domain extensions, are they worth it seo wise?
Google has been non-committal on this, other than to say the new TLDs won't get any special preference (which is a bit vague). We don't really know yet if those domain keywords will provide SEO benefit. I think most of these will be treated generically, and the keyword in the domain may carry limited benefits.
Personally, if you have a choice between a lousy domain on a traditional extension and a really memorable domain on a new extension, I might lean toward the new extension. I'm talking about homebuilder.construction vs. great-homebuilder-construction-company.org or something like that.
There's the usability aspect, too - I think it's going to take people a while to adjust. If you owned chicago.attorney, people might pick up on that, but they're still used to thinking in terms of .com, etc. There's going to be an adjustment period.
If the price is right and there's a good one out there, it may be worth buying, but I don't think there's going to be much of a gold rush on these new domains.
-
RE: NEw domain extensions, are they worth it seo wise?
Thomas is generally correct here, although Google has since begun treating .co as a "generic" TLD, which is to say they no longer geo-locate it to Colombia. See this reference:
https://support.google.com/webmasters/answer/1347922?hl=en
So, the Colombia association won't hurt, but it won't be geographically connected to Colorado, either. There is some chance that you could pick up the "Co" on a keyword match, if someone searched "Denver, Co" and you owned "denver.co", for example. That's speculation on my part, though. I certainly wouldn't count on any benefit.
-
RE: Infinite Scrolling: how to index all pictures
Keep in mind that just adding 20 images/videos to this page isn't going to automatically increase the quality. Images carry limited content that Google can crawl, and unless they're unique images that you own, they'll potentially be duplicated across the web. If adding those 20 images slows down the page a lot, that could actually harm your SEO and usability.
-
RE: Infinite Scrolling: how to index all pictures
Unfortunately, the short answer is that it depends entirely on your implementation - specifically, whether the images are loaded all at once and only revealed by scrolling, or loaded as you scroll. The latter is essentially what "infinite scrolling" is - it's generally not actually infinite, but scrolling triggers load events until there's nothing left to load.
The key is that the content has to be crawlable somehow and can't be triggered only by the scroll event, or Google won't see it. So, if you're going to load as you go, the infinite scrolling posts should apply. If the images are pre-loaded, then you shouldn't have a problem, but I'd have to understand the implementation better.
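As a rough sketch of what "pre-loaded" means (the file names and alt text are made up), the images would already be in the HTML that Google fetches, even if scripting controls when they're displayed:
```html
<!-- Hypothetical sketch: images present in the initial HTML source are crawlable -->
<div class="gallery">
  <img src="/images/photo-01.jpg" alt="Front of the hotel at sunset">
  <img src="/images/photo-02.jpg" alt="Lobby and reception area">
  <!-- Anything only appended later by a scroll-triggered script may never be fetched -->
</div>
```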
-
RE: Advice needed: Google crawling for single page applicartions with java script
Not an expert on JS by any stretch, but Richard Baxter at SEO Gadget suggested this post:
http://seogadget.com/javascript-framework-seo/
It's focused on Angular JS, but a lot of the core principles should apply more broadly.
-
RE: Rel="canonical" in hyperlink
Yeah, I'd have to agree that this is not a sanctioned use of rel="canonical". Most likely, it will do nothing at all. I doubt it would harm your site, but it's not accomplishing anything. Google is even pretty picky about placement of the tag - for example, it doesn't seem to work in the body of a page. I ran some experiments with that a couple of years ago.
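For reference, a sketch of the sanctioned form (the URL is hypothetical) - a link element in the <head> of the duplicate page, not an attribute on a hyperlink in the body:
```html
<head>
  <link rel="canonical" href="http://example.com/preferred-version/">
</head>
```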
-
RE: New Google SERPs page title lengths, 60 characters?
If you haven't yet, please see my follow-up post:
http://moz.com/blog/new-title-tag-guidelines-preview-tool
This is a moving target, and it's actually a pixel width (512px), but I tried to take a data-driven approach, and as best I can measure, 55 characters is a safe limit about 95% of the time.
I will add that Google definitely processes characters beyond that limit (some are even in the source code) and words beyond that limit could count toward ranking. They won't count much, I strongly suspect, but this new limit doesn't mean you automatically have to cut everything shorter. There's certainly no penalty for going over, as long as you're not keyword-stuffing to extremes.
One downside is that the new method (using CSS for the cut-off) means that Google now cuts mid-word instead of between words. This could be more detrimental to CTR, in my opinion. It's very situational, though. The best I can say is to look at your most important title tags in the context of real searches and make your own judgment call.
-
RE: SERP display switching between normal meta description and 15+ items
I wish I had a good answer. When Google takes liberties with snippets, it's not always easy to sort out why. The first stop is usually making sure the query is in the META description and the description isn't too long, and you've got both of those covered. Two long shots, but easy to try:
(1) Add the NOODP META tag (sketched below). Sometimes, it prevents not only descriptions pulled from the Open Directory Project, but also Google taking other liberties. Sometimes.
(2) Consider pulling your domain name out of the META description. You're repeating "Myrtle Beach Hotels" in both the title and the META description, and it's possible that looks very slightly spammy to Google. If you can, make the description slightly more natural - the length is about right.
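For reference, the standard syntax for the tag in (1) is:
```html
<!-- NOODP directive, placed in the <head> -->
<meta name="robots" content="noodp">
```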
Again, these are in no way guaranteed, but they're easy to try. I don't think tweaking the structure of the page is going to help. It's not Google recognizing the search results that's a problem - it's why they choose to rewrite. Unfortunately, we don't have a good handle on the why, at least in some cases.
-
RE: Canonical Expert question!
Do "/houses" and "/houses?page=1" have exactly the same content? I'd definitely want to see rel=canonical on the "page=1" version - those are just duplicates. Google has expressly said that they don't want you to canonical pages 2, 3, etc. back to page 1. That doesn't mean it never works, just that it's a bit dicey.
As Chris said, rel=prev/next is another option. Theoretically, it would allow all of the results pages to rank, but let Google know they're a series and not count them against you as thin content. In practice, even my enterprise SEO colleagues have mixed feelings. There's just very limited evidence regarding how effective it is. It is low-risk.
The other option is to go a bit more old-school and META NOINDEX anything with "page=", and just let the original version get indexed and rank. This can help prevent any dilution and would also solve your "page=1" issue. The biggest risk is that it could cut off PR flow across your site, or that you have links pointing to the paginated results. In most cases, that's unlikely (people don't link to or tweet page 17 of your search results), but it's a case-by-case thing.
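A rough sketch of that old-school option (the URL pattern is hypothetical) - the tag would be served on any paginated URL:
```html
<!-- Hypothetical sketch: on any URL containing "page=" (e.g. /houses?page=2) -->
<meta name="robots" content="noindex, follow">
```
"follow" is the default behavior anyway, but spelling it out makes the intent clear - keep crawling the links on those pages, just don't index them.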
Unfortunately, the "best" solution can be very situational, and even Google isn't very clear about it.
-
RE: Wordpress photo blog with sparse text - noindex posts, index categories?
I tend to agree with Andrew that blocking all 1,000 pages really removes you from a lot of potential ranking opportunities, but two issues come into play:
(1) You may be stuck behind the safe-search wall, which can really diminish the ability of those pages to rank.
(2) The existence of these photos on other sites is definitely going to increase the chances of something like a Panda penalty, or, at the very least, aggressive filtering of that content.
Long-term, I think what Andrew said about making the content richer is critical - you're going to have to provide a clear value-add that Google can see. In the meantime, it doesn't have to be all-or-none. Maybe instead of opening up all 1,000 photos to ranking, you could open up just your most popular category, try to beef up that content, and see how it goes?
-
RE: New Domain Vs. Existing Domain
I'm going to disagree a bit with the other commenters (respectfully) and say that - it depends. First off, you said it's an algorithmic penalty, and that can really follow a wide range of timelines. Let's say you got hit by a Penguin update - you'd have to wait until the next data update, even if you do everything right, which can take months. I think "weeks" is very situational and may be optimistic.
The other big factor to consider - what does your link profile look like outside of this? If you have 1,500 toxic links from a handful of domains, and they're part of a profile with 150,000 natural links from thousands of decent-to-good domains, then definitely don't write this domain off. You've got a ton of assets to lose, and you can't just 301-redirect your way out of this (you'd have to start over). You'd also be losing social, direct traffic, and potentially a lot of other things.
On the other hand, let's say you have 1,550 links, and 1,500 of them are toxic. At this point, Google's view of your site may be so dim that, at best, you'll take weeks to get the disavow processed and then effectively be left at zero (or possibly -1). If that's the case, then I think starting over is a much different equation and possibly even faster.
It also depends a lot on the strength of your domain and your other branding efforts. Changing names isn't something to take lightly, if you've built a name. On the other hand, if you've slapped up a partial-match domain (no offense intended) and part of your problem is that you've built keyword-loaded anchor text around that PMD, then cutting the domain loose could actually help you.
This isn't a decision any of us can or should make for you in Q&A, honestly.
-
RE: When do i use disallow links in WMT?
So, these are sites that scraped your post and then linked back to it? If that's the case, the links are good, in a sense - they help Google remove the duplicates. I'm not sure what you mean by "there are 2-3 always".
What does your link profile look like outside of this? If there are 66 links like this, and these are the only 66 links you have, it's possible you could be at risk. If these are 66 out of 6,000 links, then I probably wouldn't worry about it, especially if they're not paid links or somehow engineered (part of a link network, etc.).
-
RE: Knowledge Graph
Yeah, this is a critical point - you have to pin down where the old entry came from, to sort out if this is a G+ issue or a data issue in something like Freebase. Keep in mind, too, that knowledge panels can come and go, and may depend on things like your brand authority.
-
RE: Meta Description Being Picked up from another site!?
Yeah, I'd just go easy on directory links going forward. There's no clear sign you need to start cutting. I was just observing the pattern, for the most part.
-
RE: Meta Description Being Picked up from another site!?
I'm only speculating, because directories are the only place I seem to be able to find the text Google is using. It does raise the question, though - if the directories are all picking up that text, is it possible they're getting it from a common source? Your Google+ business listing, for example?
I've never seen a description pulled from another site outside of DMOZ/ODP. I've seen rewrites by Google, but usually using content somewhere on the page. So, this is definitely odd.
You could try the NOODP meta directive. I have heard cases where it helps prevent rewrites (even if the source isn't the ODP).
-
RE: Google sets brand/domain name at the end of SERP titles
Unfortunately, there's very little you can do to stop Google from rewriting titles. In some cases, if a title is too long or poorly matches frequent queries, tweaking it can help, but that's often not the case with them adding your brand name. I'm with Bill - I'd try to pin down if Google is pulling this from another source. If it's just coming from your domain, though, there may not be much you can do. There's no directive to tell them to stop rewriting, unfortunately.
-
RE: Meta Description Being Picked up from another site!?
Huh... yeah, that text doesn't seem to appear anywhere on your site, and it's not on DMOZ, that I can find. You do have some directory links to your site using that description in the profile:
http://www.bestofyorkshire.com/category/services/
...but I've never seen Google go looking that far. Your META description is honestly a bit keyword-loaded (it just looks like a string of keywords with commas to me) - if you made it more targeted and included your main keywords naturally, Google would be more likely to use it as-is. For some reason, they just don't want to pick up your main copy.
You've got a mountain of directory links that all seem to be using some mix of your home-page copy, too. It could be that these are starting to look like near-duplicates and are devaluing your home-page content somehow. I'd diversify some of that and lay off the directories a bit, personally. That's probably not the primary cause, though.
-
RE: Why are bit.ly links being indexed and ranked by Google?
Given that Chrome and most header checkers (even older ones) are processing the 301s, I don't think a minor header difference would throw off Google's crawlers. They have to handle a lot.
I suspect it's more likely that either:
(a) There was a technical problem the last time they crawled (which would be impossible to see now, if it had been fixed).
(b) Some other signal is overwhelming or negating the 301 - such as massive direct links, canonicals, social, etc. That can be hard to measure.
I don't think it's worth getting hung up on the particulars of Bit.ly's index. I suspect many of these issues are unique to them. I also expect problems will expand with scale. What works for hundreds of pages may not work for millions, and Google isn't always great at massive-scale redirects.
-
RE: Why are bit.ly links being indexed and ranked by Google?
I was getting 301->403 on SEO Book's header checker (http://tools.seobook.com/server-header-checker/), but I'm not seeing it on some other tools. Not worth getting hung up on, since it's 1 in 70M.
-
RE: Why are bit.ly links being indexed and ranked by Google?
I show the second one (bit.ly/O6QkSI) redirecting to a 403.
Unfortunately, these are only anecdotes, and there's almost no way we could analyze the pattern across 70M indexed pages without a massive audit (and Bitly's cooperation). I don't see anything inherently wrong with their setup, and if you noticed that big of a jump (10M - 70M), it's definitely possible that something temporarily went wrong. In that case, it could take months for Google to clear out the index.
-
RE: Why are bit.ly links being indexed and ranked by Google?
One of those 301s to a 403, which is probably thwarting Google, but the other two seem like standard pages. Honestly, it's tough to do anything but speculate. It may be that so many people are linking to or sharing the short version that Google is choosing to ignore the redirect for ranking purposes (they don't honor signals as often as we like to think). It could simply be that some of them are fairly freshly created and haven't been processed correctly yet. It could be that these URLs got indexed when the target page was having problems (bad headers, down-time, etc.), and Google hasn't recrawled and refreshed those URLs.
I noticed that a lot of our "mz.cm" URLs (Moz's Bitly-powered short domain) seem to be indexed. In our case, it looks like we're chaining two 301s (because we made the domain move last year). It may be that something as small as that chain could throw off the crawlers, especially for links that aren't recrawled very often. I suspect that shortener URLs often get a big burst of activity and crawls early on (since that's the nature of social sharing) but then don't get refreshed very often.
Ultimately, on the scale of Bit.ly, a lot can happen. It may be that 70M URLs is barely a drop in the bucket for Bit.ly as well.
-
RE: Does Google throttle back the search performance of a penalised website/page after the penalty has been removed?
If reconsideration worked and you got a reply from Google, it's likely that you were facing a manual penalty (either instead of or in addition to Penguin). So, it may be that Penguin or some other algorithmic penalty is still in play (echoing what Andy said).
Once a penalty expires or is lifted, I'm unaware of any kind of dampening on the site (like, 50% penalty for 3 months and then 25%, etc.). This is much more likely to be a situation where you have multiple layers of problems (some could be technical, etc., and not penalties) and you've removed just the top layer.
-
RE: Why are bit.ly links being indexed and ranked by Google?
It looks like bit.ly is chaining two 301s: the first one goes to feedproxy.google.com (FeedProxy is like AdSense for feeds, I think), and then the second 301 goes to the destination site. I suspect this intermediary may be part of the problem.
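Roughly, the chain looks like this (the URLs below are placeholders, not the actual links):
```
bit.ly/xxxxxx  --301-->  feedproxy.google.com/...  --301-->  destination-site.com/post
```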
-
RE: Using Canonical Attribute
Just want to add one comment. Where people end up in trouble is when they apply the canonical tag too broadly (to non-duplicates). This tends to happen when you have a CMS and one template drives multiple pages. So, let's say that all of your product pages are created by:
http://example.com/product.php
...and you just add IDs to that to create a product, like:
http://example.com/product.php?id=123
If you add a canonical tag to "product.php" pointing to a single product, you would essentially tell Google to canonicalize every product page on your site to just that one product. This is because that one physical file impacts hundreds of URLs. So, in that case, you would have to make sure the code logic was in place to apply the proper ID.
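As a rough sketch (using the hypothetical URLs above), the rendered <head> of each product URL needs to reference that product's own URL, not a hard-coded one:
```html
<!-- Wrong: hard-coded in the template, so every product canonicalizes to product 123 -->
<link rel="canonical" href="http://example.com/product.php?id=123">

<!-- Right: the template writes out the current product's own URL -->
<!-- e.g., the page served at product.php?id=456 should output: -->
<link rel="canonical" href="http://example.com/product.php?id=456">
```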
-
RE: Alternative Link Detox tools?
Agreed - it's not much fun, but every reputable link auditor I know uses multiple sources. All of the tools (including our own at Moz) have different biases, and when you're trying to get as complete a list as possible, you need to use as many sources as you can.
I would highly recommend against going too automated - the cost "savings" short-term could be lost quickly if you start cutting potentially good links. It really depends on your current risk/reward profile. If you're already hit hard with a penalty, then cutting deep and fast may be a good bet (and automation would be more effective). If you're being proactive to prevent future issues, then relying too much on automation could be very dangerous.
-
RE: Canonical
Honestly, it depends a lot on the business case. In many cases, consolidating to one site has advantages, but there was some reason you split the sites, and I don't know that history. So, it can be tough to say whether you should abandon one site.
Certainly, if you do, and if you 301-redirect those pages from the abandoned site (which you should, unless that site was penalized), then that content on the stronger site should do well.
-
RE: PR2 vs Page Authority 65
With this type of discrepancy, that's often the case. Either the site has been penalized or the linking sites have been devalued in some way (say, a link network). DA/PA basically model the raw strength of the link profile, but they don't account for some quality factors.
Of course, that assumes the toolbar PR is up-to-date and accurate, which is a fairly big assumption these days, IMO.
-
RE: Is There Google 6th page penalty?
If that's your site, I'd at least nofollow it, and probably remove it. Not sure you should disavow your own site, though.
LinkDetox and similar tools can help you spot these links, but I'd really recommend going through them by hand. You're probably going to have to kill off or at least disavow some of the aggressive exact-match anchor text while you're building more natural links.
-
RE: Is There Google 6th page penalty?
This looks, at first glance, more like a penalty than a technical issue. Google is indexing 20K+ pages on the site, including your home-page, and your home-page is ranking for exact-match title phrases (and even partial title phrases). When you hit the "money" term, though, that's when you've been knocked down the list.
You do seem to be pushing exact-match anchor text very hard, and on some links that are probably of questionable quality. For example, there are header/footer links to "İddaa" on this site:
These clearly have no relevance, and probably even look like paid links to Google. They may be causing penalties in general, or they may be causing term-specific penalties (related to the anchor text). Given that the site seems to be in the gambling industry, Google is going to be even more suspicious.
-
RE: Is There Google 6th page penalty?
Sorry, do you mean that your page 1 rankings fell to page 6? There used to be a "-50 penalty", which was pretty severe (it doesn't get much worse, short of an all-out ban). There are many other possible explanations, though, including technical ones.
If every keyword (or a lot of them) dropped from page 1 to page 6, it's possible that you're facing a link-based penalty - a uniform drop like that would be a strange coincidence otherwise. If it's a technical problem, you'd more likely see certain phrases or keywords drop out entirely. Again, I'm speaking generally.
-
RE: Sub-pages have no pa
Unfortunately, there's no quick fix for reversing canonicals. If Google is indexing the pages, it's probably fine - I'd double-check them with a "site:" operator and see if they're showing up correctly (titles, snippets, ranking for exact-match terms, etc.). In some cases, I recommend adding self-referencing rel=canonicals (to counteract the old ones), and it never hurts to have a good XML sitemap in place in GWT. Again, though, you said you're getting indexed, so it may be nothing.
If you want to Private Message me or contact support, we can try to sort out why we're still not crawling the other pages.
-
RE: Suggestion - How to improve OSE metrics for DA & PA
I'm not directly involved in the project, but I think that's actually part of what they're doing - using Google de-indexation and obvious penalties to train the system, but trying to avoid a system that would have to go look up the site on Google every time it needed to make a prediction.
-
RE: Why is "Noindex" better than a "Canonical" for Pagination?
I guess the short answer is that Google frowns on this practice, since the pages aren't really duplicates. Since they frown on it, they may choose to simply ignore the canonical, and you'll be left with the original problem. I think the broader issue is that this requires a lot of extra crawling/processing on their part, so it's not that it's "black hat" - it's just a pain for them.
I've typically found putting a NOINDEX on pages 2+ is more effective, even in 2014. That said, I do think rel=prev/next has become a viable option, especially if your site isn't high risk for duplicates. Rel=prev/next can, in theory, allow Google to rank any page in the series, without the negative effects of the near-duplicates.
Keep in mind that you can combine rel=prev/next and rel=canonical if you're using sorts/filters/etc. Google does support the use of rel=canonical for variants of the same search page. It gets pretty confusing and the simple truth is that they've made some mixed statements that seem to change over time.
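As a rough sketch of one way that combination has been described (the URLs and parameters are hypothetical), a sorted variant of page 2 might canonicalize to the un-sorted page 2 while keeping its place in the series:
```html
<!-- Hypothetical sketch: head of /products?page=2&sort=price -->
<link rel="canonical" href="http://example.com/products?page=2">
<link rel="prev" href="http://example.com/products?page=1&sort=price">
<link rel="next" href="http://example.com/products?page=3&sort=price">
```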
-
RE: Suggestion - How to improve OSE metrics for DA & PA
Thanks - happy to pass that along. We're actually in the middle of a long-term spam detection project to help notify people when a site seems to be suspicious or is likely to be penalized by Google. Eventually, this may find its way into DA/PA. We don't want to use ranking and Google's own numbers, as it creates a bit of a problematic data dependency for us (especially long-term).
-
RE: Sitemap created on client's Joomla site but it is not showing up on site reports as existing? (Thumbs Up To Answers)
I know of no valid reason not to use Google Webmaster Tools - there's way too much paranoia on this subject, IMO. If you're knowingly doing something extremely black-hat, then maybe, but Google can detect most of that without GWT. GWT isn't adding any kind of tracking to your site - it's just revealing to you what they already know, for the most part.
The nice thing about GWT is that it can help validate the XML sitemap and, once validated, help you figure out what's getting indexed.
I suspect these other tools are just looking for some default name for the file, and you're using a non-default one. You can map it, as Thomas said, or you can just tell Google what the filename is. Beyond the search engines, I'm not sure who really needs to process your XML sitemap.
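For what it's worth, the sitemaps protocol also lets you declare a non-default filename in robots.txt (the URL here is hypothetical), in addition to submitting it directly in GWT:
```
# robots.txt
Sitemap: http://www.example.com/my-custom-sitemap.xml
```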
-
RE: Anyone Have a Tool or Method to Track Successful Link Removals?
Unfortunately, I don't think we track a lot of history beyond raw numbers. I'm not aware of anything where we'd show a link disappearing from the link graph. Majestic tracks more history, but I don't think they do it by the individual link either (we're both focused on link acquisition/growth). You can export our link data, but that gets pretty manual.
Some of the removal tools, like Remove'em (http://www.removeem.com/), claim to do this.
I can imagine building a crawler that would do this, but I'm trying to think of a way to build it from "off the shelf" parts, so to speak, and I'm coming up empty.
-
RE: Disavowin a sitewide link that has Thousands of subdomains. What do we tell Google?
Google does allow root domains in disavow, but I'm honestly not sure how they would handle this with a mega-site with unique sub-domains like Blogspot. Typically, Google treats these sub-domains as stand-alone sites (isolating their PageRank, penalties, etc.). I tend to agree with the consensus, that the best bet is to disavow the individual blogs, and not the entire root domain. If you're really in bad shape and you have much more to lose from Blogspot links than gain, you could disavow the root domain, but I'm not sure if anyone has good data on the potential impact.
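As a rough sketch of the consensus approach (the blog names are made up), the disavow file would list the individual sub-domains rather than the root:
```
# Hypothetical disavow entries - individual Blogspot sub-domains
domain:spammy-blog-one.blogspot.com
domain:spammy-blog-two.blogspot.com
# domain:blogspot.com (the entire root) would be the riskier, last-resort version
```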
-
RE: High domain authority for shady link directories
It's important to keep in mind that DA and PA are measures of the strength of a link profile and, to some degree, a site's/page's raw ranking ability. Our authority metrics don't have built-in spam detection, though, and they aren't always aware of sites that Google may have devalued. Spam analysis has been in the works for quite a while now, and it's a complicated problem (as Google has proven). We're hoping to improve DA/PA in this regard over time, but for now there are going to be some situations where a site doesn't really have the ranking power that its DA suggests. If your gut feeling is that the bulk of the site's links come from bad directories and low-quality sources, you may very well be correct.
-
RE: Incoming links which don't exists...
If these are all coming from one site, and you're worried about them, this is actually a good case for the disavow tool. You can disavow an entire domain in a single line:
https://support.google.com/webmasters/answer/2648487?hl=en
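The format is just this (the domain is hypothetical):
```
domain:site-that-ran-the-ad.com
```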
As Michael said, getting Google to actually recrawl/recache all of those pages can take quite a while. With the ad gone, it's probably a non-issue and they'll eventually clear out, but disavow would remove any lingering doubt.
Unfortunately, there's no way to tell if you've been penalized without knowing more about the site, traffic, etc. I'd say it's unlikely for a paid link from a single site, especially if that link was subsequently removed. Google isn't usually that aggressive about it, especially if your site generally has solid authority/reputation.
-
RE: Google dance tools - do they still give an indication of rankings to come?
Rankings are a lot less stable than they used to be, so I think those days of cleanly seeing differences across data centers are pretty much gone. This is something I deal with a lot in the MozCast project, and today's rankings, even across short time periods on a single data center, are pretty volatile, especially for keywords with a news/QDF component. I wrote a bit about it a while back:
http://moz.com/blog/a-week-in-the-life-of-3-keywords
Sometimes, different data centers will show things like features that are in testing and they may have regional differences for locally sensitive queries, but the index itself seems to propagate pretty quickly these days.
-
RE: How much domain authority is passed on through a link from a page with low authority?
Unfortunately, Dr. Matt's out of town, so this answer won't be as thorough as I'd like, but the gist of it is that we factor in both DA and PA to some degree. If a site has very high DA but a very low PA (as in your example), we're going to bias somewhat toward the DA, as we believe Google sometimes does (a virtually unknown page on Wikipedia can rank very well, for example). Likewise, a high-PA page on a site with low DA may pass more through the PA, because (hopefully) that page has some legitimate authority.
It's a bit more complex, because DA and PA are related as well, so they do influence each other to some degree. On larger sites, the influence of any single page's PA is small, but the influence can be more obvious on smaller sites.
Putting aside our math, I think a link from a low-authority page on a high-authority site can be worthwhile. Look at directories, like DMOZ. I think it's important to make sure the low-PA page is indexed (some DMOZ pages aren't, because they're buried so deeply in the link structure), and I wouldn't get hung up on a link from a high-DA site. It's ultimately just one link. I wouldn't dismiss it as useless either, though. There's some amount of site/page balance in the mix and Google doesn't rely on only one type of authority.
-
RE: Using canonical for duplicate contents outside of my domain
Unfortunately, that's a lot trickier. If you're trying to rank both the .com and .sg versions for, let's say, US residents, and those sites have duplicate content, then you do run the risk of Google filtering one of them out. If you use canonical tags or something like that, then one site will be taken out of contention for ranking - in that case, you won't rank both sites for the same term. The only way to have your cake and eat it too is to make the sites as unique as possible.
Even then, you're potentially going to duplicate effort and cannibalize your own rankings, so it's a risky proposition. In some cases, it may be better to try to promote your social profiles and other pages outside of your site that have some authority. It doesn't have to be your own site ranking, just a site that's generally positive or neutral.
-
RE: How much domain authority is passed on through a link from a page with low authority?
I'm checking with Dr. Matt Peters, head of our Data Science team. It's less about secrecy and more that DA/PA are based on machine learning and have gotten a bit complex. Still, I suspect we can give you a general sense of how DA/PA pass through links. Posting this because I suspect an answer may take a day or two.