Posts made by edmundsseo
-
Excluding Googlebot From AB Test - Acceptable Sample Size To Negate Cloaking Risk?
My company uses a proprietary A/B testing platform. We are testing an entirely new experience on our product pages, but it is not optimized for SEO, so the testing framework will not show the challenger recipe to search bots. With that in mind, and to avoid any risk of cloaking, what is an acceptable sample size (or percentage) of traffic to funnel into this test?
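To give a sense of what the bucketing looks like on our end, here is a simplified sketch of the idea (not our platform's actual code; the bot list and the 10% split are placeholders):

    import hashlib

    # Placeholder list of crawler user-agent substrings; the platform's real list is longer.
    KNOWN_BOTS = ("googlebot", "bingbot", "slurp", "duckduckbot", "baiduspider")

    TEST_TRAFFIC_SHARE = 0.10  # hypothetical 10% of eligible visitors see the challenger

    def assign_recipe(user_agent, visitor_id):
        """Return 'control' or 'challenger' for this request."""
        # Search bots are always held out of the test and served the control recipe.
        if any(bot in user_agent.lower() for bot in KNOWN_BOTS):
            return "control"
        # A deterministic hash of the visitor ID keeps each visitor in one bucket.
        bucket = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16) % 100
        return "challenger" if bucket < TEST_TRAFFIC_SHARE * 100 else "control"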
-
RE: Letting Others Use Our Content: Risk-Free Attribution Methods
This is exactly my concern. Our site is massive in its own industry, but this other site is a top player across many industries - surely we'd be impacted by such an implementation without some steps taken to confirm attribution.
Thank you for confirming my suspicions.
-
RE: Letting Others Use Our Content: Risk-Free Attribution Methods
Thank you for chiming in, Eric!
Their pages already rank extraordinarily well: #1 for almost every related term they have products for, across the board.
They're also not open to linking back to our content.
-
Letting Others Use Our Content: Risk-Free Attribution Methods
Hello Moz!
A massive site that you've all heard of is looking to syndicate some of our original editorial content. This content is our bread and butter, and is one of the primary reasons why people use our site.
Note that this site is not a competitor of ours - we're in different verticals.
If this massive site were to use the content straight up, I'm fairly confident that they'd begin to outrank us for related terms pretty quickly due to their monstrous domain authority.
This is complex because they'd like to use bits and pieces of the content interspersed with their own content, so they can't just implement a cross-domain canonical. It'd also be difficult to load the content in an iframe with noindex,nofollow header tags since their own content (which they want indexed) will be mixed up with ours.
They're also not open to including a link back to the product pages where the corresponding reviews live on our site.
Are there other courses of action that could be proposed that would protect our valuable content?
Is there any evidence that using schema.org (Review and Organization schemas) pointing back to our review page URLs would provide attribution and prevent them from outranking us for associated terms?
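If it helps, the markup I'm imagining on their pages would be something along these lines (names and URLs are placeholders), with the url and author fields pointing back at the original review page on our site:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Review",
      "url": "https://www.example.com/reviews/example-product-review",
      "author": {
        "@type": "Organization",
        "name": "Our Site",
        "url": "https://www.example.com"
      },
      "itemReviewed": {
        "@type": "Product",
        "name": "Example Product"
      },
      "reviewBody": "Excerpt of the syndicated review text..."
    }
    </script>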
-
Latest Best Practices for Single Page Applications
What are the latest best practices for SPA (single page application) experiences?
Google is obviously crawling JavaScript now, but is there any data to support that they crawl it as effectively as they do static content?
Considering Bing (and Yahoo) as well as social (FB, Pinterest, etc.) - what is the best practice that will cater to the lowest-common-denominator bots and work across the board?
Is a prerender solution still the advised route?
Escaped fragments with snapshots at the expanded URLs, with SEO-friendly URL rewrites?
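To make those last two options concrete, the setup I'm weighing is roughly the sketch below (a simplified example assuming a Flask front end and pre-generated HTML snapshots on disk; the paths and bot list are placeholders):

    from flask import Flask, request, send_file
    from werkzeug.utils import secure_filename

    app = Flask(__name__)

    # Placeholder list of bot/scraper user-agent substrings to serve snapshots to.
    BOT_AGENTS = ("googlebot", "bingbot", "yahoo", "facebookexternalhit", "pinterest")

    @app.route("/products/<slug>")
    def product(slug):
        ua = request.headers.get("User-Agent", "").lower()
        wants_snapshot = (
            "_escaped_fragment_" in request.args      # bot requested the snapshot variant
            or any(bot in ua for bot in BOT_AGENTS)   # or identified itself as a known bot
        )
        if wants_snapshot:
            # Serve the pre-rendered static HTML snapshot for this route.
            return send_file("snapshots/products/" + secure_filename(slug) + ".html")
        # Everyone else gets the JavaScript application shell.
        return send_file("app/index.html")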
-
RE: Desktop & Mobile Sitemaps Covering The Same Ground - Any Benefit To Having Both?
Yes, it's responsive design with the exact same URLs for both mobile and desktop.
Thanks for your helpful response!
-
Desktop & Mobile Sitemaps Covering The Same Ground - Any Benefit To Having Both?
If my URL structure is the same for the desktop and mobile experience, is there any benefit to creating a mobile sitemap, considering that the sitemap for our desktop site covers the same URLs?
-
RE: Blog on subdomain?
I thought Google stopped treating subdomains as separate entities? In the following video, Matt Cutts says that they're essentially the same as subdirectories now: http://www.youtube.com/watch?v=_MswMYk05tk&feature=youtube_gdata
Have you seen evidence that shows otherwise? Not challenging your answer here; I'm genuinely curious.
-
RE: Navigation for Users vs Spiders
Sorry, I should have clarified: the navigation uses AJAX, so the links don't actually appear anywhere in the source. We do have breadcrumbs on the product pages. Thanks!
-
Navigation for Users vs Spiders
We're creating a new global site nav that provides a great user experience, but may be less than ideal for the search engines. The user selects an item from category A, and is then presented options to choose from in category B, and then chooses a specific product. The user does not encounter any actual "links" until they choose the specific product.
The search engines won't see this navigation path due to the way that the navigation is coded. They're unable to choose an item from A, so they can't get to B, and therefore cannot get to C, which is the actual product page.
We'd like to create an alternative nav for the crawlers, so that they can reach the category pages for A and B, as well as the specific product pages (C).
This alternative nav would be displayed if the user does not have JavaScript enabled. Otherwise, the navigation described above will be shown to the user.
Moving forward, the navigation that the user sees may be different from what is shown to the search engine, based on user preferences (i.e., they may only see some of the categories in the nav, while the search engines will see links to all category/product pages).
I know that, as a general rule, it's important that the search engines see the same thing that the user sees. Does the strategy outlined above put us at risk for penalties?
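For reference, the fallback I have in mind is just plain crawlable links along these lines (placeholder URLs), rendered when JavaScript is unavailable while everyone else gets the AJAX nav:

    <noscript>
      <nav>
        <ul>
          <li><a href="/category-a/">Category A</a>
            <ul>
              <li><a href="/category-a/option-b/">Option B</a>
                <ul>
                  <li><a href="/category-a/option-b/product-c/">Product C</a></li>
                </ul>
              </li>
            </ul>
          </li>
        </ul>
      </nav>
    </noscript>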
-
Broken sitemaps vs no sitemaps at all?
The site I am working on is enormous. We have 71 sitemap files, all linked to from a sitemap index file.
The sitemaps are not up to par with "best practices" yet, and realistically it may be another month or so until we get them cleaned up.
I'm wondering if, for the time being, we should just remove the sitemaps from Webmaster Tools altogether. They are currently "broken", and I know that sitemaps are not mandatory. Perhaps they're doing more harm than good at this point? According to Webmaster Tools, there are 8,398,082 "warnings" associated with the sitemap, many of which seem to be related to URLs being linked to that are blocked by robots.txt.
I was thinking that I could remove them and then keep a close eye on the crawl errors/index status to see if anything changes.
Is there any reason why I shouldn't remove these from Webmaster Tools until we get the sitemaps up to par with best practices?
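For what it's worth, part of the cleanup plan is to cross-check the sitemap URLs against robots.txt before resubmitting; a rough sketch of that check (the URLs are placeholders for our real files):

    import urllib.request
    import urllib.robotparser
    import xml.etree.ElementTree as ET

    ROBOTS_URL = "https://www.example.com/robots.txt"      # placeholder
    SITEMAP_URL = "https://www.example.com/sitemap-1.xml"  # one of the 71 child sitemaps
    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    robots = urllib.robotparser.RobotFileParser(ROBOTS_URL)
    robots.read()

    with urllib.request.urlopen(SITEMAP_URL) as response:
        tree = ET.parse(response)

    # Report any sitemap URL that robots.txt blocks for Googlebot.
    for loc in tree.findall(".//sm:url/sm:loc", NS):
        url = loc.text.strip()
        if not robots.can_fetch("Googlebot", url):
            print("Blocked by robots.txt:", url)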
-
RE: Scanning For Duplicate Canonical Tags
Paul,
Thanks for your reply! I have used the paid version of Screaming Frog with regex to exclude pages with certain parameters, but I have not tried the custom queries.
Could you give me an example of a custom query that would find empty canonical tags? That would be extremely helpful.
-
Scanning For Duplicate Canonical Tags
I'm looking for a solution for identifying pages on a site that have either empty/undefined canonical tags, or duplicate canonical tags (meaning the tag occurs twice within the same page).
I've used Screaming Frog to view sitewide canonical values, but the tool cannot identify when pages use the tag twice, nor can it differentiate between pages that have an empty canonical tag and pages that have no canonical tag at all.
Any help finding a tool of some sort that can assist me in doing this would be much appreciated, as I'm working with tens of thousands of pages and can't do this manually.
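To frame what I'm after, this is roughly the per-URL check I'd want to run (a quick sketch using requests and BeautifulSoup; the URL list stands in for our crawl export):

    import requests
    from bs4 import BeautifulSoup

    # Placeholder list; in practice this would be the URL export from a crawl.
    urls = ["https://www.example.com/page-1", "https://www.example.com/page-2"]

    for url in urls:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")

        # Collect every <link> whose rel attribute includes "canonical".
        canonicals = [tag for tag in soup.find_all("link")
                      if "canonical" in (tag.get("rel") or [])]
        hrefs = [(tag.get("href") or "").strip() for tag in canonicals]

        if not canonicals:
            print(url, "-> no canonical tag")
        elif len(canonicals) > 1:
            print(url, "-> duplicate canonical tags:", hrefs)
        elif not hrefs[0]:
            print(url, "-> empty/undefined canonical tag")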