Posts made by linklater
-
RE: Can I leave off HTTP/HTTPS in a canonical tag?
Very good to hear, thanks Shawn! The goal is to use absolute canonicals, but for a period of time we may have to use protocol-relative URLs. The redirects in place should avoid any duplicate content issues, which seems to be the big landmine.
-
RE: Can I leave off HTTP/HTTPS in a canonical tag?
Hey Shawn, did leaving the HTTP/HTTPS protocol unspecified (i.e. a protocol-relative URL) work for you in the canonical and/or hreflang tags? We are going through a transition to HTTPS as well, and have multiple systems with some hard-coded URLs. Hoping this solution would work as a short-term fix while we update these pages to use a new, more dynamic system.
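For what it's worth, the kind of thing I have in mind for the "more dynamic system" is building the canonical from the request itself rather than hard-coding the protocol. A rough Flask-style sketch, purely illustrative (the domain, route, and template are placeholders, not our actual setup):

```python
# Sketch only: emit an absolute canonical based on how the page was served,
# instead of a hard-coded http:// URL or the protocol-relative
# href="//www.example.com/..." stopgap.
from flask import Flask, request, render_template_string

app = Flask(__name__)

PAGE = """<head>
  <link rel="canonical" href="{{ canonical }}">
</head>"""

@app.route("/some-page")
def some_page():
    # request.scheme is "http" or "https", depending on how this request arrived
    canonical = f"{request.scheme}://www.example.com{request.path}"
    return render_template_string(PAGE, canonical=canonical)
```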
-
RE: Can we create regional pages on Facebook?
Yes, it is still possible. I see this feature utilized by a number of brands in my vertical.
The redirect is automatic, based on location, and underneath the brand name and segment there's a link with the wording "Redirected from [main_page_name]".
The Like count is shared, and if a user mouses over the "redirected from..." link text they see a pop-up with the header of the primary page.
Definitely do-able, and I would say increasingly popular for major brands with a strong international presence.
Cheers!
-
How does TripAdvisor ensure all their user reviews get crawled?
TripAdvisor has a LOT of user-generated content. Searching for a random hotel always seems to return a paginated list of 90+ pages. However, once the first page is clicked and "#REVIEWS" is appended to the URL, the URL never changes with any subsequent clicks of the paginated links.
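Quick illustration of why that worries me: the fragment is client-side only and is stripped before any HTTP request is made, so to a crawler every paginated click looks like the same URL (the hotel URL below is made up):

```python
# The "#REVIEWS" part is a fragment; it never reaches the server.
from urllib.parse import urldefrag

url = "https://www.tripadvisor.com/Hotel_Review-g123-d456-Reviews-Some_Hotel.html#REVIEWS"
base, fragment = urldefrag(url)
print(base)      # ...Some_Hotel.html  <- what the server (and a crawler) actually sees
print(fragment)  # REVIEWS             <- never part of the request itself
```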
How do they ensure that all this review content gets crawled?
Thanks,
linklater
-
Pitfalls when implementing the “VARY User-Agent” server response
We serve up different desktop/mobile-optimized HTML on the same URL, based on a visitor’s device type.
While Google continues to recommend the HTTP Vary: User-Agent header for mobile-specific versions of the page (http://www.youtube.com/watch?v=va6qtaiZRHg), we’re also aware of issues raised around CDN caching: http://searchengineland.com/mobile-site-configuration-the-varies-header-for-enterprise-seo-163004 / http://searchenginewatch.com/article/2249533/How-Googles-Mobile-Best-Practices-Can-Slow-Your-Site-Down / http://orcaman.blogspot.com/2013/08/cdn-caching-problems-vary-user-agent.html
As this is primarily for Google's benefit, it's been proposed that we only return the Vary: User-Agent header when a Google user agent is detected (Googlebot/MobileBot/AdBot).
So here's the thing: as the server header response is not “content” per se, I think this could be an okay solution, though I wanted to throw it out there to the esteemed Moz community and get some additional feedback.
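To make the proposal concrete, here's a Flask-style sketch of the idea (purely illustrative; the user-agent substrings are just a first guess at what we'd match, not a definitive list):

```python
# Sketch: only emit "Vary: User-Agent" when the requester looks like a Google crawler.
from flask import Flask, request

app = Flask(__name__)

GOOGLE_UA_TOKENS = ("Googlebot", "AdsBot-Google")  # placeholder list

@app.after_request
def add_vary_for_google(response):
    ua = request.headers.get("User-Agent", "")
    if any(token in ua for token in GOOGLE_UA_TOKENS):
        # add() preserves any existing Vary values (e.g. Accept-Encoding)
        response.headers.add("Vary", "User-Agent")
    return response
```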
Do you guys see any issues/problems with implementing this solution?
Cheers!
linklater
-
Topical Sitelinks
This is a question about the non-branded, topical sitelinks SERP - i.e. those sitelinks without additional descriptive text that tend to be shown for queries that are information-centric as opposed to brand-centric.
I've been noticing an increase in these among some big brands in the travel vertical, and wanted to throw it out there to the esteemed Moz community.
Are these the preserve of big site SEO/brands, or is there anything we can/should be doing to increase our chances of getting these topical sitelinks to appear?
Cheers!
-
RE: Help - we're blocking SEOmoz crawlers
Hi Keri,
Still testing, though I see no reason why this shouldn't work, so I will close the QA ticket.
cheers!
-
RE: How to get rogerbot whitelisted for application firewalls.
Hi Joel,
Just wondering if you guys came up with a fix for this one? I have the same issue myself...
Thanks!
-
RE: Help - we're blocking SEOmoz crawlers
We maintain a blacklist of crawlers (and other agents) to control server loads, so I'm just looking for the user-agent string I can add to the whitelist. This one should do the trick:
Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; http://www.seomoz.org/dp/rogerbot)
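I'll probably key the check off the "rogerbot" token rather than the exact string, in case the version number changes. A minimal, purely illustrative sketch of what I mean (not our actual firewall config):

```python
# Illustrative whitelist check: match on the crawler token, case-insensitively.
CRAWLER_WHITELIST = ("rogerbot",)

def is_whitelisted(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(token in ua for token in CRAWLER_WHITELIST)

print(is_whitelisted(
    "Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; "
    "http://www.seomoz.org/dp/rogerbot)"
))  # True
```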
-
RE: Help - we're blocking SEOmoz crawlers
Thanks Gerd, though it looks like your robots.txt example is a disallow rule, whereas I'm looking to let the crawler through.
I'll give this one a try: Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; http://www.seomoz.org/dp/rogerbot)
-
Help - we're blocking SEOmoz crawlers
We have a fairly stringent blacklist, and by the looks of our crawl reports we've begun unintentionally blocking the SEOmoz crawler.
Can you guys let me know the user-agent string, and anything else I need, to make sure your crawlers are whitelisted?
Cheers!
-
How many times does robots.txt get visited by crawlers, especially Google?
Hi,
Do you know if there's any way to track how often the robots.txt file has been crawled?
I know we can check when it was last downloaded in Webmaster Tools, but I actually want to know whether crawlers download it every time they visit any page on the site (e.g. hundreds of thousands of times every day), or less often.
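One way I figure we could answer this for our own site is to count robots.txt requests per user agent in the raw access logs. A rough sketch, assuming a combined log format and a made-up log path:

```python
# Count robots.txt fetches per user agent from a combined-format access log.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # placeholder path
LINE_RE = re.compile(r'"(?:GET|HEAD) /robots\.txt[^"]*".*?"([^"]*)"\s*$')

counts = Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = LINE_RE.search(line)
        if match:
            counts[match.group(1)] += 1  # group(1) is the user-agent field

for agent, hits in counts.most_common(10):
    print(f"{hits:6d}  {agent}")
```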
thanks...
-
Big-site SEO: To maintain HTML sitemaps, or scrap them in the era of XML?
We have dynamically updated XML sitemaps which we feed to Google et al.
Our XML sitemap is updated constantly and takes minimal hands-on management to maintain.
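Conceptually, what the XML side does amounts to something like the sketch below (get_live_urls() is a stand-in for whatever actually feeds it; this is not our real code):

```python
# Simplified sketch of a dynamically generated XML sitemap.
from datetime import date
from xml.sax.saxutils import escape

def get_live_urls():
    # Placeholder: in reality this would be pulled from the CMS/database.
    return ["https://www.example.com/", "https://www.example.com/about"]

def build_sitemap(urls):
    entries = "".join(
        f"  <url><loc>{escape(u)}</loc>"
        f"<lastmod>{date.today():%Y-%m-%d}</lastmod></url>\n"
        for u in urls
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}"
        "</urlset>\n"
    )

print(build_sitemap(get_live_urls()))
```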
However, we still have an HTML version (which we link to from our homepage), a legacy from back in the pre-XML days. As this HTML version is static, we're finding it contains a lot of broken links and is not of much use to anyone.
So my question is this - does Google (or any other search engine) still need both, or are XML sitemaps enough?
-
Multi-lingual SEO: Country-specific TLDs, or migration to a huge .com site?
Dear SEOmoz team,
I’m an in-house SEO looking after a number of sites in a competitive vertical. Right now we have our core example.com site translated into over thirty different languages, with each one sitting on its own country-specific TLD (so example.de, example.jp, example.es, example.co.kr etc…).
Though we’re using a template system so that changes to the .com domain propagate across all languages, over the years things have become more complex in quite a few areas. For example, the level of analytics script hacks and filters we have created in order to channel users through to each language profile is now bordering on the epic.
For a number of reasons we’ve recently been discussing the cost/benefit of migrating all of these languages into the single example.com domain. On first look this would appear to simplify things greatly; however I’m nervous about what effect this would have on our organic SE traffic.
All these separate sites have cumulatively received years of on/off-site work, and even if we went through the process of setting up page-for-page redirects to their new home on example.com, I would hate to lose all this hard work (and business) if we saw our rankings tank as a result of the move.
So I guess the question is, for an international business such as ours, which is the optimal site structure in the eyes of the search engines: local sites on local TLDs, or one mammoth site with language identifiers in the URL path (or on subdomains)?
Is Google still so reliant on the TLD for geo-targeting search results, or is it less of a factor in today’s search engine environment?
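If we did consolidate onto example.com, my understanding is we'd lean on hreflang annotations (plus per-subfolder geo-targeting in Webmaster Tools) to keep the right language version ranking in each market. A tiny sketch of what I mean; the locale list and URL pattern are purely illustrative:

```python
# Illustrative hreflang block for one page on a consolidated example.com.
LOCALES = ["en", "de", "ja", "es", "ko"]

def hreflang_links(path: str) -> str:
    lines = [
        f'<link rel="alternate" hreflang="{lang}" '
        f'href="https://www.example.com/{lang}{path}">'
        for lang in LOCALES
    ]
    lines.append(
        '<link rel="alternate" hreflang="x-default" '
        f'href="https://www.example.com{path}">'
    )
    return "\n".join(lines)

print(hreflang_links("/pricing/"))
```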
Cheers!