Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.

Posts made by Dr-Pete
-
RE: Value of an embedded site vs. a direct link?
Dupe-content-wise, you should be fine. iFrames just make me itchy these days, and I've never thought they were good for users, but it shouldn't be a disaster for SEO. The biggest problem is probably just that you're not getting any SEO value - it's really just direct traffic via a referring site. Granted, it's better than nothing, and I know from painful experience that sometimes you have to take what you can get in these situations.
-
RE: Value of an embedded site vs. a direct link?
When you say "embed", do you know specifically what they have in mind? (That word can mean a couple of different things depending on the context.) If they're just looking to copy the content, then it's important that they link back to you and probably even use cross-domain canonical tags. Otherwise, they'll be competing with you for your own content. It's not just a matter of traffic: Google could filter out your version of the page or even (at large scale) devalue your entire site. In other words, they could mistake you for the one copying the content, especially if the other sites are more authoritative.
If you're talking about old-school embedding, like wrapping up your content in an iFrame or something like that, I'd avoid it entirely. Those "solutions" are outdated and more trouble than they're worth.
It is common to "embed" some content, like infographics, but those embeds usually have a link back or some clear attribution. If you're just talking about using the content, then I think you're much better off just asking people to use snippets (like a paragraph or two) and then linking to the source.
If you've got a specific example of what someone has in mind, I'd be happy to dig deeper.
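For reference, a cross-domain canonical is just the standard canonical tag in the `<head>` of the syndicated copy, pointing back at the original. A minimal sketch (the domains and path are placeholders):

```html
<!-- On the syndicating site's copy of the article -->
<head>
  <!-- Tells Google the original lives on your domain,
       so the copy doesn't compete with (or outrank) you -->
  <link rel="canonical" href="https://www.example.com/original-article" />
</head>
```

The syndicating site has to add this on their end, which is why it's worth agreeing on before handing over the content.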
-
RE: Removing Dynamic "noindex" URL's from Index
Hooray! Usually, I just give my advice and then run away, so it's always nice to hear I was actually right about something.
Seriously, glad you got it sorted out.
-
RE: Canonical URLs and Sitemaps
With the canonical tag in place, I'm guessing that extra link would basically be ignored. It's probably harmless, but I'm not sure it will do anything. You could create an HTML "sitemap" (or even an XML sitemap) with the canonical URLs. It's not my first choice, but it at least would give Google an extra push.
-
RE: Duplicate title-tags with pagination and canonical
I suspect you're ok, then. I'd watch those GWT numbers, but unless you're seeing problems with indexation and ranking, I'd just consider that a notice. I think you're handling it by the book, at least as well as currently possible with Google's changing and somewhat mixed signals on the subject.
-
RE: Duplicate title-tags with pagination and canonical
Unfortunately, it can be really tough to tell if Google is honoring the rel=prev/next tags, but I've had gradually better luck with those tags this year. I honestly think the GWT issue is a mistake on Google's part, and probably isn't a big deal. They do technically index all of the pages in the series, but the rel=prev/next tags should mitigate any ranking issues that could occur from near-duplicate content. You could add the page # to the title, but I doubt it would have any noticeable impact (other than possibly killing the GWT warning).
I would not canonical to the top page - that's specifically not recommended by Google and has fallen into disfavor over the past couple of years. Technically, you can canonical to a "View All" page, but that has its own issues (practically speaking - such as speed and usability).
Do you have any search/sort filters that may be spinning out other copies, beyond just the paginated series? That could be clouding the issue, and these things do get complicated.
I've had luck in the past with using META NOINDEX, FOLLOW on pages 2+ of pagination, but I've gradually switched to rel=prev/next. Google seems to be getting pickier about NOINDEX, and doesn't always follow the cues consistently. Unfortunately, this is true for all of the cues/tags these days.
Sorry, that's a very long way of saying that I suspect you're ok in this case, as long as the tags are properly implemented. You could tell GWT to ignore the page= parameter in parameter handling, but I'm honestly not sure what impact that has in conjunction with rel=prev/next. It might kill the warning, but the warning's just a warning.
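For what it's worth, the two approaches above look roughly like this in the `<head>` of page 2 of a paginated series (the URLs and parameter name are placeholders - use one approach or the other, not both):

```html
<!-- Option 1: rel=prev/next, signaling page 2's place in the series -->
<link rel="prev" href="https://www.example.com/widgets?page=1" />
<link rel="next" href="https://www.example.com/widgets?page=3" />

<!-- Option 2 (the older approach): keep pages 2+ out of the index,
     but let Google follow their links -->
<meta name="robots" content="noindex, follow" />
```

Page 1 would carry only a rel="next" tag, and the last page only a rel="prev".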
-
RE: Removing Dynamic "noindex" URL's from Index
Is there a crawl path to them currently? One issue I see a lot: a bunch of pages get indexed, the crawl path is found and cut off, NOINDEX (or canonical, 301, etc.) is added, but then the pages never get re-crawled. Since they don't get re-crawled, the page-level directive never gets honored.
If there's a URL parameter involved, you could use parameter-handling in GWT - it's not a perfect solution, but it sometimes seems to work without a re-crawl.
The other option would be to create a new XML sitemap with all of the bad/indexed URLs. This may push Google to re-crawl them and then see the tags to deindex. It's a bit safer than re-opening the crawl paths.
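A sketch of that kind of one-off XML sitemap - nothing but the stuck URLs you want re-crawled (the URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- List only the bad/indexed URLs, so Google re-crawls them
       and finally sees the NOINDEX (or canonical) on each page -->
  <url><loc>https://www.example.com/search?filter=blue</loc></url>
  <url><loc>https://www.example.com/search?filter=red</loc></url>
</urlset>
```

Once the pages drop out of the index, you'd remove this sitemap so it doesn't keep inviting crawls of pages you don't want.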
If they are being crawled and Google is just ignoring the NOINDEX for some reason, I'd try to 301 or canonical those pages to a primary search page, if that's feasible (probably canonical, since you don't want users getting 301'ed away). Sometimes, if a signal hasn't been working for that long, you just have to shake Google and try a different signal. Even following their exact recommendations, things rarely work as planned at large scale.