Blocking AJAX Content from being crawled
-
Our website has some pages with content shared from a third-party provider, and we use AJAX as our implementation. We don't want Google to crawl the third party's content, but we do want them to crawl and index the rest of the page. However, in light of Google's recent announcement about more effectively crawling and indexing AJAX content, I'm concerned that we're at risk of that content being indexed.
I have thought about X-Robots-Tag, but I'm wary of implementing it on these pages because of the potential risk of Google not indexing the whole page. These pages drive significant traffic to the website, and I can't risk losing that.
Thanks,
Phil
-
Hey Phil. I think I've understood your situation, but just to be clear, I'm presuming you have URLs exposing third-party JSON/XML content that you don't want indexed by Google. Probably the most foolproof method for this case is the "X-Robots-Tag" HTTP header convention (http://code.google.com/web/controlcrawlindex/docs/robots_meta_tag.html). I would recommend going with "X-Robots-Tag: none", which should do the trick (I really don't think "noarchive" or the other options are required if the content isn't being indexed at all). You'll need to modify your server-side scripts to send this header; I'm assuming there's not much pain required for you (or the third party?) to do that. Hope this helps! ~bryce
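To illustrate that server-side change, here's a minimal sketch assuming, purely for example's sake, that the AJAX endpoints run on Python/Flask (the route name and payload are made up):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical endpoint that serves the third-party JSON content.
@app.route("/partner-feed")
def partner_feed():
    payload = {"items": []}  # stand-in for the third-party data
    response = jsonify(payload)
    # "none" is shorthand for "noindex, nofollow": browsers can still
    # fetch the response, but crawlers keep this URL out of the index.
    response.headers["X-Robots-Tag"] = "none"
    return response
```

Because the header rides only on the AJAX responses, the host pages carry no robots directive at all, so their own indexing is unaffected.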
Related Questions
-
Duplicate Content and Subdirectories
Hi there and thank you in advance for your help! I'm seeking guidance on how to structure a resources directory (white papers, webinars, etc.) while avoiding duplicate content penalties. If you go to /resources on our site, there is a filter function. If you filter for webinars, the URL becomes /resources/?type=webinar. We didn't want that dynamic URL to be the primary URL for webinars, so we created a new page at /resources/webinar that lists all of our webinars and includes a featured webinar up top. However, the same webinar titles now appear on both the /resources page and the /resources/webinar page. Will that cause duplicate content issues? P.S. Not sure if it matters, but we also changed the URLs for the individual resource pages to include the resource type. For example, one of our webinar URLs is /resources/webinar/forecasting-your-revenue. Thank you!
Technical SEO | SAIM_Marketing
-
What is a good crawl budget?
Hi Community! I am in the process of updating sitemaps and am trying to find a benchmark for what is considered a "strong" crawl budget. All the documentation I've found covers how to improve crawl budget or what to watch out for; what I'm looking for is a target figure to aim for (e.g., 60% of the sitemap crawled, 100%, etc.).
Technical SEO | yaelslater
-
Sitemap For Static Content And Blog
We'll be uploading a sitemap to Google Search Console for a new site. We have ~70-80 static pages that don't really change much (some may change as we modify a couple of pages over the course of the year). But we have a separate blog on the site which we will be adding content to frequently. How can I set up the sitemap to make sure that "future" blog posts will get picked up and indexed? I used a sitemap generator and it picked up the first blog post that's on the site, but I'm wondering what happens with future ones. I don't want to resubmit a new sitemap each time we publish a new blog post.
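One common pattern here, sketched purely for illustration, is to serve the sitemap dynamically so it is rebuilt from the post list on every fetch; then future posts appear without resubmitting anything. The Flask setup and all_posts() helper below are hypothetical stand-ins:

```python
from flask import Flask, Response

app = Flask(__name__)

def all_posts():
    # Stand-in for the blog's data source; in practice this would
    # query the CMS or database for every published post.
    return [
        {"url": "https://example.com/blog/first-post", "updated": "2014-01-15"},
    ]

@app.route("/sitemap.xml")
def sitemap():
    entries = "".join(
        "<url><loc>{url}</loc><lastmod>{updated}</lastmod></url>".format(**p)
        for p in all_posts()
    )
    xml = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
        + entries + "</urlset>"
    )
    # Regenerated on every request, so new posts are included the
    # next time Google fetches the sitemap.
    return Response(xml, mimetype="application/xml")
```

Since Google periodically re-fetches a submitted sitemap, a dynamic one only needs to be submitted once.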
Technical SEO | vikasnwu
-
Duplicate content and 404 errors
I apologize in advance, but I am an SEO novice and my understanding of code is very limited. Moz has issued a lot (several hundred) of duplicate content and 404 error flags on the ecommerce site my company takes care of. For the duplicate content, some of the pages it says are duplicates don't even seem similar to me. Additionally, a lot of them are static pages where we embed images of size charts that we use as popups on item pages. It says these issues are high priority, but how bad is this? Is this just an issue because, if a page has similar content, the engine spider won't know which one to index? Also, what is the best way to handle these URLs bringing back 404 errors? I should probably have a developer look at these issues, but I wanted to ask the extremely knowledgeable Moz community before I do 🙂
Technical SEO | AliMac26
-
Migrate Old Archive Content?
Hi, Our team has recently acquired several newsletter titles from a competitor. We are currently deciding how to handle the archive content on their website, which now belongs to us. We are thinking of leaving the content on their site (so as not to suddenly remove a chunk of their website and harm them) but also replicating it on ours with a canonical link to say our website is the original source. The articles on their site go back as far as 2010. Do you think it would help or hinder our site to have a lot of old archive content added to it? I'm thinking of content freshness issues. Even though the content is old, some of it will still be interesting or relevant. Or do you think the authority and extra traffic this content could bring in make it worth migrating? Any help gratefully received on the old content issue or the idea of using canonical links in this way. Many Thanks
Technical SEO | frantan
-
Duplicate Page Content
Hi, I just had my site crawled by the SEOmoz robot and it came back with some errors. Basically it seems the category and date pages are not being crawled correctly. I'm an SEO newbie here. Below is a capture of the video of what I am talking about. Any ideas on how to fix this?
Technical SEO | mcardenal
-
Why has Google stopped indexing my content?
Mystery of the day! Back on December 28th, there was a 404 on the sitemap for my website. This lasted 2 days before I noticed and fixed it. Since then, Google has not indexed my content. However, the majority of content prior to that date still shows up in the index. The website is http://www.indieshuffle.com/. Clues:
- Google reports no current issues in Webmaster Tools
- Two reconsideration requests have returned "no manual action taken"
- When new posts are detected as "submitted" in the sitemap, they take 2-3 days to "index"
- Once "indexed," they cannot be found in search results unless I include url:indieshuffle.com
- The sitelinks that used to pop up under a basic search for "Indie Shuffle" are now gone
- I am using Yoast's SEO tool for Wordpress (and have been for years)
- Before December 28th, I was doing 90k impressions / 4.5k clicks
- After December 28th, I'm now doing 8k impressions / 1.3k clicks
Ultimately, I'm at a loss for a possible explanation. Running an SEOMoz audit comes up with warnings about rel=canonical and a few broken links (which I've fixed in reaction to the report). I know these things often correct themselves, but two months have passed now, and it continues to get progressively worse. Thanks, Jason
Technical SEO | indieshuffle
-
How do I combat content theft?
A new site popped up that has completely replicated a site owned by my client. This site is literally a copycat: it scraped all the content and copied the design down to the colors. I've already reported the site to the hosting provider and filed a spam report with Google. I noticed that the author changed some of the text and internal links so that they don't link to our site anymore, though some were missed. I'm also going to take a couple of preventative actions, like changing things in .htaccess, but that only helps if it happens again in the future; it doesn't help me now. I'm wondering what else I can or should be doing?
Technical SEO | flowsimple