Questions created by RosemarieReed
Subdomained White-Label Sites
Wanted to pass along a specific use case I'm thinking through in the technical setup for a client. The site, http://www.xyz.com, is an ecommerce company that offers white-labeled sites: an affiliate can join, get access to their own branded copy of the site, and ultimately get a cut of whatever is sold through it. So I join the site, get access to scott.xyz.com, and handle my business through that. From a technical standpoint, this is the proposed setup:

- Canonical URLs will be set to www.xyz.com
- Pages on scott.xyz.com will be set to noindex, while the main www.xyz.com will be indexable
- Webmaster Tools for scott.xyz.com will be set to a preferred domain of www.xyz.com
- scott.xyz.com will have a separate robots.txt instructing crawlers not to crawl

Questions:

1. Am I missing any steps in properly setting up the technical background of the subdomain sites? The use of subdomains isn't something I am able to move away from.
2. Will any links into scott.xyz.com pass juice and authority to www.xyz.com, or does the noindex/nocrawl combination block that from happening?
3. Is there anything else that I am missing?

Thanks!
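One interaction in the proposed setup worth double-checking: a robots.txt crawl block means Googlebot never fetches the scott.xyz.com pages, so it also never sees the noindex tag or the canonical placed on them. As a sketch, using the hostnames from the question, these are the two directives that work against each other:

```
# robots.txt at scott.xyz.com/robots.txt — blocks all crawling
User-agent: *
Disallow: /

<!-- In the <head> of each scott.xyz.com page — a crawler can only
     read these if robots.txt does NOT block the page -->
<meta name="robots" content="noindex">
<link rel="canonical" href="http://www.xyz.com/">
```

If the goal is simply to keep the affiliate subdomains out of the index, allowing crawling while serving noindex (and/or the canonical) is the combination that lets crawlers actually read the directive; this also bears on the link-equity question, since links on a page that can't be crawled can't be followed at all.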
Intermediate & Advanced SEO | RosemarieReed
Ecommerce Tabs
This isn't a unique problem, but an ecommerce client has product information on a page with separate tabs that have historically loaded as new pages, which have been indexed:

- Product (/product): 8,450 results
- Content1 (/product?tab=content1): 966 results
- Content2 (/product?tab=content2): 683 results
- Content3 (/product?tab=content3): 1,750 results
- Content4 (/product?tab=content4): 1,500 results

All of the content shares a common product top section (a summary of information) but has unique canonical URL definitions, meta information, etc. The individual content tabs are all part of a larger grouping, which is why their index level is considerably lower than the actual product page's.

As the client grows and updates this historical practice, one implementation option is making the tab content available on the page via an Ajax load. The desire is to maintain the ability to search for content1, content2, etc. at that level, and not spread the juice throughout all the main product pages.

My question: what would the best setup be to maintain the historical ability to target the content individually via search, while updating the UI/UX for a better customer experience? If the Ajax route is the way to go, what are all the tasks necessary to handle it properly without creating a separate duplicate path? Some of the tasks I've outlined:

- Using pushState to update the URL when the tab is changed
- Is there also a way to update canonicals and meta information?
- What else am I missing?

Any guidance would be great, as I'd love to get some thoughts on the matter. Thanks!
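The pushState task listed above can be sketched roughly as follows (function names and the meta-handling are hypothetical, not the client's actual code; note that title/canonical updates made client-side are only visible to crawlers that execute JavaScript):

```javascript
// Hypothetical helper: build the URL that pushState records for a tab,
// matching the historical /product?tab=contentN pattern.
function tabUrl(productPath, tab) {
  return tab ? productPath + "?tab=" + encodeURIComponent(tab) : productPath;
}

// Browser-only: called when a tab is clicked, after its Ajax content loads.
// Updates the address bar plus the title/canonical for the newly shown tab.
function onTabChange(productPath, tab, meta) {
  history.pushState({ tab: tab }, "", tabUrl(productPath, tab));
  document.title = meta.title;
  var canonical = document.querySelector('link[rel="canonical"]');
  if (canonical) canonical.setAttribute("href", meta.canonical);
}

// Browser-only: restore the right tab when the user hits back/forward,
// so every pushState-created URL remains a working entry point.
if (typeof window !== "undefined") {
  window.addEventListener("popstate", function (event) {
    var tab = event.state && event.state.tab;
    // re-load and display the Ajax content for `tab` here
  });
}
```

The popstate handler is the piece most often missed: without it, the back button changes the URL but not the visible tab, and the pushState URLs stop matching what is on screen.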
Ajax Module Crawlability vs. WMT Fetch & Render
Recently a module was built into the homepage to pull in content from an outside source via Ajax, and I'm curious about the overall crawlability of that content. In WMT, if I fetch & render the page, the content displays correctly, but if I view the source, all I see is the empty container. Should I take additional steps so that the actual Ajax content appears in my source code, or am I "good" since the content displays correctly when I fetch & render?
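The "view source shows an empty container" observation can be turned into a repeatable check: run something like the sketch below (container id and function name are hypothetical) against the raw page source to see whether the module's content is actually in the served markup or only filled in later by JavaScript:

```javascript
// Hypothetical check: given raw page source (what "view source" shows),
// report whether a container div holds any server-rendered content or
// is just an empty shell waiting for Ajax to populate it.
function containerIsEmpty(html, id) {
  var re = new RegExp('<div[^>]*id="' + id + '"[^>]*>([\\s\\S]*?)</div>');
  var match = html.match(re);
  // Treat a missing container the same as an empty one.
  return !match || match[1].trim() === "";
}
```

If the check reports the container as empty in the raw source, the content is only reachable by crawlers that render JavaScript; fetch & render confirms Google can render it, and whether that is "good enough" versus serving the content in the initial HTML is exactly the trade-off the question is asking about.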