Issues with Duplicates and AJAX-Loaded Content
-
Hi,
On one website, the "real" content is loaded via AJAX when a visitor clicks on a tile (I'll call a page with such tiles a "tile page" here). At that point a parameter is appended to the URL and the tile's content is displayed. That content is also available at a URL of its own ... which in practice is never called.
What I want to achieve is a canonicalised tile page that carries all of the tiles' content and gets indexed by Google - ideally with Google also recognising that the tiles' standalone URLs are only fallbacks and that the tile page should be shown instead.
As it stands, the tile page produces duplicate meta tags, titles etc., with only minimal differences between what Google treats as separate pages (i.e. the same page with a different tile's content shown).
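To make concrete what I mean - the parameter name and URLs below are placeholders, not the site's real ones - each parameterised variant would declare the tile page as its canonical, something like:

```html
<!-- served on e.g. /projekte.aspx?tile=beispiel (placeholder URL): -->
<link rel="canonical" href="http://www.g1.de/projekte.aspx" />
```

That would at least collapse the parameter variants back onto the tile page, though it doesn't by itself solve the fallback-URL question.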
Does anybody have an idea what one could do here?
-
Hi CleverPhD,
Thanks for your answer! The website is indeed a little dated, and SEO was not considered when it was built - or so I have been informed.
http://www.g1.de/projekte.aspx is the URL with the clearest problems, although similar tiles exist on other pages as well. As you can see from the code, the URL does change, albeit in a non-ideal way (via a parameter), and the page essentially stays the same with only a tiny fraction of its content swapped out.
The USA Today approach is interesting and I will look into it. I have a slight feeling, though, that their setup is quite a bit different(?).
-
I would really need to see the page you mention to be sure I am following you, but one approach would be this: when the content is loaded via AJAX, put the tile's actual URL in the address bar, not the one with the parameter. That way you never have two URLs that need to be canonicalised in the first place. You would still need to test this with a spider program to make sure the URLs are found. I suspect you would also need a sitemap or alternative navigation so the spiders can find the pages and get them catalogued.
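For the sitemap part, a minimal sketch - the paths below are invented placeholders, not your site's actual URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- the tile page itself -->
  <url><loc>http://www.g1.de/projekte.aspx</loc></url>
  <!-- each tile's standalone fallback URL (placeholder path) -->
  <url><loc>http://www.g1.de/projekte/beispiel-projekt</loc></url>
</urlset>
```

Listing the standalone URLs explicitly gives the spiders a way in even if the AJAX navigation hides them.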
All of that said, I have to be honest: my gut tells me that if you have to work this hard to get spiders to find the URLs correctly, the design may be too clever for its own good, and you may need to rethink your approach. USA Today uses a setup that seems similar to yours - check it out at http://www.usatoday.com/. When you click a tile to view a story, an AJAX-style overlay of the article appears on top of the home page. You can X out of it to go back to the home page, or page through other articles from within it (left and right arrows). While you do this, notice that USA Today updates the address bar with an SEO-friendly URL. I have not tested the site spider-wise, but just by the look of it they seem to have the balance right.
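The USA Today-style pattern is usually done with the History API. A hedged sketch of the idea - `buildTileUrl`, the `data-slug` attribute, and the URL shapes are my assumptions for illustration, not code from either site:

```javascript
// Derive a clean standalone URL for a tile from the current page path.
// e.g. "/projekte.aspx" + "beispiel" -> "/projekte/beispiel" (assumed scheme).
function buildTileUrl(basePath, tileSlug) {
  return basePath.replace(/\.aspx$/, "") + "/" + encodeURIComponent(tileSlug);
}

// Called when a visitor clicks a tile: load the content via AJAX as
// before, but push the tile's real URL into the address bar, so the
// AJAX state and the crawlable fallback page share one URL.
function openTile(tile) {
  // ... fetch and display the tile content via AJAX here ...
  var cleanUrl = buildTileUrl(window.location.pathname, tile.dataset.slug);
  history.pushState({ tile: tile.dataset.slug }, "", cleanUrl);
}
```

You would also want a `popstate` listener that re-renders the right tile when the visitor uses the back button, so those URLs behave like real pages.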
Good luck!