
Posts made by Dr-Pete
-
RE: Infinite Scrolling: how to index all pictures
If it's done right, I don't think the risk of harm is high, but: (1) it's easy to do wrong, and (2) I suspect the benefits are small at best. I think your time/money is better spent elsewhere.
-
RE: Infinite Scrolling: how to index all pictures
By assigning a URL to each virtual "page," you allow Google to crawl the images (provided it's done correctly). What Google is suggesting is that you then set up rel=prev/next between those pages. This tells them to treat all of the image URLs as a paginated series (like a multi-page article or search results).
My enterprise SEO friends have mixed feelings about rel=prev/next. The evidence of its effectiveness is limited, but what it's supposed to do is allow the individual pages (images, in this case) to rank while not looking like duplicate or near-duplicate content. The other option would be to rel=canonical these virtual pages, but then you'd essentially take the additional images out of ranking contention.
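To make it concrete, here's roughly what the head of one of those virtual pages might contain - a minimal sketch, with placeholder /gallery/ URLs:

```html
<!-- Hypothetical <head> of the second "virtual page" of the gallery (URLs are
     placeholders). rel=prev/next declares that this URL is part of a paginated
     series rather than duplicate or near-duplicate content. -->
<head>
  <title>Project Photos - Page 2</title>
  <link rel="prev" href="http://www.example.com/gallery/page-1/">
  <link rel="next" href="http://www.example.com/gallery/page-3/">
</head>
```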
This infinite scroll + pagination approach is VERY technical and the implementation is well beyond Q&A's scope (it would take fairly in-depth knowledge of your site). Honestly, my gut reaction is that the time spent wouldn't be worth the gain. Most users won't know to scroll, and having 10-20 pictures vs. just a few may not add that much value. The SEO impact would be relatively small, I suspect. I think there may be easier solutions that would achieve 90% of your goals with a lot less complexity.
-
RE: Infinite Scrolling: how to index all pictures
There should be no real difference, in terms of Google's infinite scroll solution. If you can chunk the content into pages with corresponding URLs, you can put any source code on those pages - text and/or images, along with corresponding alt text, etc. Once you've got one solution implemented, it should work for any kind of HTML. Not sure why images would be different in this case.
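Just to sketch it out (the filenames, alt text, and URLs below are placeholders), each paginated chunk is ordinary HTML, so the images can carry alt text and link to the neighboring pages like any other content:

```html
<!-- Hypothetical markup for one paginated chunk of the gallery
     (filenames, alt text, and URLs are placeholders). -->
<ul class="gallery">
  <li><img src="/images/kitchen-remodel-01.jpg" alt="Remodeled kitchen with new cabinets"></li>
  <li><img src="/images/kitchen-remodel-02.jpg" alt="Close-up of the tile backsplash"></li>
</ul>
<a href="/gallery/page-1/">Previous page</a>
<a href="/gallery/page-3/">Next page</a>
```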
There are also ways to create photo galleries that can be crawled, mostly using AJAX. It's complex, but here's one example/discussion:
-
RE: New domain extensions, are they worth it SEO-wise?
Unfortunately, no - still pretty limited on travel - if I take a trip, it's usually to the Moz office. Speaking at MozCon in July and then in the Czech Republic in November.
-
RE: New domain extensions, are they worth it SEO-wise?
Google has been non-committal on this, other than to say the new TLDs won't get any special preference (which is a bit vague). We don't really know yet if those domain keywords will provide SEO benefit. I think most of these will be treated generically, and the keyword in the domain may carry limited benefits.
Personally, if you have a choice between a lousy domain on a traditional extension and a really memorable domain on a new extension, I might lean toward the new extension. I'm talking about homebuilder.construction vs. great-homebuilder-construction-company.org or something like that.
There's the usability aspect, too - I think it's going to take people a while to adjust. If you owned chicago.attorney, people might pick up on that, but they're still used to thinking in terms of .com, etc. There's going to be an adjustment period.
If the price is right and there's a good one out there, it may be worth buying, but I don't think there's going to be much of a gold rush on these new domains.
-
RE: New domain extensions, are they worth it SEO-wise?
Thomas is generally correct here, although Google has since begun treating .co as a "generic" TLD, which is to say they no longer geo-locate it to Colombia. See this reference:
https://support.google.com/webmasters/answer/1347922?hl=en
So, the Colombia association won't hurt, but it won't be geographically connected to Colorado, either. There is some chance that you could pick up the "Co" on a keyword match, if someone searched "Denver, Co" and you owned "denver.co", for example. That's speculation on my part, though. I certainly wouldn't count on any benefit.
-
RE: Infinite Scrolling: how to index all pictures
Keep in mind that just adding 20 images/videos to this page isn't going to automatically increase the quality. Images contain limited content that Google can crawl, and unless they're unique images that you own, they'll potentially be duplicated across the web. If adding those 20 images slows down the page a lot, that could actually harm your SEO and usability.
-
RE: Infinite Scrolling: how to index all pictures
Unfortunately, it depends entirely on your implementation - the short answer is that it matters whether the images are loaded all at once and only revealed by scrolling, or loaded as you scroll. The latter is essentially what "infinite scrolling" is - it's generally not actually infinite, but scrolling triggers load events until there's nothing left to load.
The key is that the content has to be crawlable somehow and can't only be triggered by the event, or Google won't see it. So, if you're going to load as you go, the infinite scrolling posts should apply. If the images are pre-loaded, then you shouldn't have a problem, but I'd have to understand the implementation better.
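As a rough illustration (all URLs, filenames, and the data attribute are made up): the scroll-triggered loading can stay, as long as there's also a plain, crawlable path to the same images - for example, ordinary pagination links that the script builds on:

```html
<!-- Hypothetical: images beyond the first chunk are loaded by script as the user
     scrolls, but plain <a> pagination links give crawlers a non-JavaScript path
     to the same content (URLs and the data attribute are placeholders). -->
<div id="gallery" data-next-page="/gallery/page-2/">
  <img src="/images/photo-01.jpg" alt="First photo in the gallery">
  <img src="/images/photo-02.jpg" alt="Second photo in the gallery">
</div>
<nav class="pagination">
  <a href="/gallery/page-2/">Page 2</a>
  <a href="/gallery/page-3/">Page 3</a>
</nav>
```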
-
RE: Google sets brand/domain name at the end of SERP titles
Unfortunately, there's very little you can do to stop Google from rewriting titles. In some cases, if a title is too long or poorly matches frequent queries, tweaking it can help, but that often isn't the issue when they're simply appending your brand name. I'm with Bill - I'd try to pin down whether Google is pulling this from another source. If it's just coming from your domain, though, there may not be much you can do. There's no directive to tell them to stop rewriting, unfortunately.
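If you do end up tweaking the title, the usual starting point is a concise title (roughly under 60 characters) that already shows the brand the way you want it - a hypothetical example, with placeholder keyword and brand:

```html
<!-- Hypothetical: a concise title that already ends with the brand, leaving
     Google less reason to truncate it or append the brand name on its own. -->
<title>Blue Widget Pricing &amp; Reviews | BrandName</title>
```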
-
RE: Why are bit.ly links being indexed and ranked by Google?
Given that Chrome and most header checkers (even older ones) are processing the 301s, I don't think a minor header difference would throw off Google's crawlers. They have to handle a lot.
I suspect it's more likely that either:
(a) There was a technical problem the last time they crawled (which would be impossible to see now if it has since been fixed).
(b) Some other signal is overwhelming or negating the 301 - such as massive direct links, canonicals, social, etc. That can be hard to measure.
I don't think it's worth getting hung up on the particulars of Bit.ly's index. I suspect many of these issues are unique to them. I also expect problems will expand with scale. What works for hundreds of pages may not work for millions, and Google isn't always great at massive-scale redirects.
-
RE: Why are bit.ly links being indexed and ranked by Google?
I was getting 301->403 on SEO Book's header checker (http://tools.seobook.com/server-header-checker/), but I'm not seeing it on some other tools. Not worth getting hung up on, since it's 1 in 70M.
-
RE: Why are bit.ly links being indexed and ranked by Google?
I show the second one (bit.ly/O6QkSI) redirecting to a 403.
Unfortunately, these are only anecdotes, and there's almost no way we could analyze the pattern across 70M indexed pages without a massive audit (and Bitly's cooperation). I don't see anything inherently wrong with their setup, and if you noticed that big of a jump (10M - 70M), it's definitely possible that something temporarily went wrong. In that case, it could take months for Google to clear out the index.
-
RE: Why are bit.ly links being indexed and ranked by Google?
One of those 301s to a 403, which is probably thwarting Google, but the other two seem like standard pages. Honestly, it's tough to do anything but speculate. It may be that so many people are linking to or sharing the short version that Google is choosing to ignore the redirect for ranking purposes (they don't honor signals as often as we like to think). It could simply be that some of them are fairly freshly created and haven't been processed correctly yet. It could be that these URLs got indexed when the target page was having problems (bad headers, down-time, etc.), and Google hasn't recrawled and refreshed those URLs.
I noticed that a lot of our "mz.cm" URLs (Moz's Bitly-powered short domain) seem to be indexed. In our case, it looks like we're chaining two 301s (because we made the domain move last year). It may be that something as small as that chain could throw off the crawlers, especially for links that aren't recrawled very often. I suspect that shortener URLs often get a big burst of activity and crawls early on (since that's the nature of social sharing) but then don't get refreshed very often.
Ultimately, on the scale of Bit.ly, a lot can happen. It may be that 70M URLs is barely a drop in the bucket for Bit.ly as well.
-
RE: Why are bit.ly links being indexed and ranked by Google?
It looks like bit.ly is chaining two 301s: the first one goes to feedproxy.google.com (FeedProxy is like AdSense for feeds, I think), and then the second 301 goes to the destination site. I suspect this intermediary may be part of the problem.
-
RE: Alternative Link Detox tools?
Agreed - it's not much fun, but every reputable link auditor I know uses multiple available sources. All of the tools (including our own at Moz) have different biases, and when you're trying to get as complete a list as possible, you need to use as many sources as you can.
I would highly recommend against going too automated - the short-term cost "savings" could be lost quickly if you start cutting potentially good links. It really depends on your current risk/reward profile. If you're already hit hard with a penalty, then cutting deep and fast may be a good bet (and automation would be more effective). If you're being proactive to prevent future issues, then relying too much on automation could be very dangerous.
-
RE: Why is "Noindex" better than a "Canonical" for Pagination?
I guess the short answer is that Google frowns on this practice, since the pages aren't really duplicates. Since they frown on it, they may choose to simply ignore the canonical, and you'll be left with the problem. I think the general problem is that this requires a lot of extra crawling/processing on their part, so it's not that it's "black hat" - it's just a pain for them.
I've typically found putting a NOINDEX on pages 2+ is more effective, even in 2014. That said, I do think rel=prev/next has become a viable option, especially if your site isn't high risk for duplicates. Rel=prev/next can, in theory, allow Google to rank any page in the series, without the negative effects of the near-duplicates.
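In case it helps, the NOINDEX option is just a meta robots tag in the head of pages 2 and up - a minimal sketch, with a placeholder title:

```html
<!-- Hypothetical <head> for page 2+ of a paginated series: "noindex, follow"
     keeps the page out of the index but still lets Google crawl it and follow
     its links through to the individual items. -->
<head>
  <title>Product Category - Page 2</title>
  <meta name="robots" content="noindex, follow">
</head>
```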
Keep in mind that you can combine rel=prev/next and rel=canonical if you're using sorts/filters/etc. Google does support the use of rel=canonical for variants of the same search page. It gets pretty confusing and the simple truth is that they've made some mixed statements that seem to change over time.
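As a rough sketch of that combination (the URLs and sort parameter are made up): a sorted variant of page 2 can canonical back to the plain page 2, while rel=prev/next still points at its neighbors in the sorted series:

```html
<!-- Hypothetical <head> for /widgets/page-2/?sort=price, a sorted variant of
     page 2: rel=canonical points to the unsorted version of the same page,
     while rel=prev/next keeps its place in the paginated series. -->
<head>
  <link rel="canonical" href="http://www.example.com/widgets/page-2/">
  <link rel="prev" href="http://www.example.com/widgets/page-1/?sort=price">
  <link rel="next" href="http://www.example.com/widgets/page-3/?sort=price">
</head>
```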
-
RE: Disavowing a sitewide link that has thousands of subdomains. What do we tell Google?
Google does allow root domains in disavow, but I'm honestly not sure how they would handle this for a mega-site with unique sub-domains like Blogspot. Typically, Google treats these sub-domains as stand-alone sites (isolating their PageRank, penalties, etc.). I tend to agree with the consensus that the best bet is to disavow the individual blogs, not the entire root domain. If you're really in bad shape and have much more to lose than gain from the Blogspot links, you could disavow the root domain, but I'm not sure anyone has good data on the potential impact.
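For what it's worth, here's roughly what the two approaches look like in the disavow file itself - the format accepts one URL or domain: directive per line, and the blog names below are just placeholders:

```
# Option 1 (generally safer): disavow only the individual offending blogs
domain:spammy-blog-1.blogspot.com
domain:spammy-blog-2.blogspot.com

# Option 2 (much more aggressive): disavow the entire root domain
domain:blogspot.com
```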
-
RE: How much domain authority is passed on through a link from a page with low authority?
Unfortunately, Dr. Matt's out of town, so this answer won't be as thorough as I'd like, but the gist of it is that we factor in both DA and PA to some degree. If a site has very high DA but a very low PA (as in your example), we're going to bias somewhat toward the DA, as we believe Google sometimes does (a virtually unknown page on Wikipedia can rank very well, for example). Likewise, a high-PA page on a site with low DA may pass more through the PA, because (hopefully) that page has some legitimate authority.
It's a bit more complex, because DA and PA are related as well, so they do influence each other to some degree. On larger sites, the influence of any single page's PA is small, but the influence can be more obvious on smaller sites.
Putting aside our math, I think a link from a low-authority page on a high-authority site can be worthwhile. Look at directories, like DMOZ. I think it's important to make sure the low-PA page is indexed (some DMOZ pages aren't, because they're buried so deeply in the link structure), but I wouldn't get too hung up on any single link from a high-DA site - it's ultimately just one link. I wouldn't dismiss it as useless, either. There's some amount of site/page balance in the mix, and Google doesn't rely on only one type of authority.
-
RE: How much domain authority is passed on through a link from a page with low authority?
I'm checking with Dr. Matt Peters, head of our Data Science team. It's less about secrecy and more that DA/PA are based on machine learning and have gotten a bit complex. Still, I suspect we can give you a general sense of how DA/PA pass through links. Posting this because I suspect an answer may take a day or two.