Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will remain viewable - we have locked both new posts and new replies. More details here.
Stop Google indexing CDN pages
-
Just when I thought I'd seen it all, Google hits me with another nasty surprise!
I have a CDN to deliver images, JS and CSS to visitors around the world. As far as I can tell, I have no links on the site to the static HTML pages the CDN serves, but someone else may have linked to them - perhaps a scraper site?
Google has decided the static pages it was able to access through the CDN have more value than my real pages, and it seems to be slowly replacing my pages in the index with the static copies.
Anyone got an idea on how to stop that?
Obviously, I have no access to the static area, because it is served by the CDN, so there is no way I know of to put a robots.txt file there.
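The only idea I've had so far - and it rests on an assumption, since I'm not sure my CDN forwards the requested hostname to my origin when it pulls content - is that because an origin-pull CDN fetches the CDN host's robots.txt from my own server, I could branch on the Host header and serve a blocking robots.txt only on the CDN hostname (or attach an X-Robots-Tag: noindex header the same way). A rough sketch of the idea in Flask, with placeholder hostnames:

```python
# Sketch only: serve a blocking robots.txt when the request arrives via
# the CDN hostname, and the normal one otherwise. This assumes the CDN
# passes the original Host header through on origin pull.
from flask import Flask, Response, request

app = Flask(__name__)

CDN_HOSTS = {"cdn.example.com"}  # placeholder CDN hostname


@app.route("/robots.txt")
def robots():
    host = request.headers.get("Host", "").lower()
    if host in CDN_HOSTS:
        body = "User-agent: *\nDisallow: /\n"  # block crawlers on the CDN copy
    else:
        body = "User-agent: *\nDisallow:\n"    # allow crawling of the real site
    return Response(body, mimetype="text/plain")
```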
It could be that I have to scrap the current CDN setup and change it to allow only the image directory, and maybe set up a separate CDN subdomain for content that contains only the JS and CSS?
Have you seen this problem and beaten it?
(Of course, the next thing is Roger might look at the Google results and start crawling them too, LOL.)
P.S. The reason I am not asking this question in the Google forums is that others have asked it many times over the past 5 months, nobody at Google has bothered to answer, and nobody who did try gave an answer that was remotely useful. So I'm not really hopeful of anyone here having a solution either, but I expect this is my best bet, because you guys are always willing to try.
-
Thank you, Edward.
I don't have quite that problem, but I think you are right too.
My CDN is set up to be origin pull.
That means there is no need to FTP anything - the system just fetches content from the origin as it is requested.
You should check that out if you have to FTP everything.
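In case it helps you picture it, origin pull behaves roughly like this - a toy sketch in Python, not any particular CDN's implementation, with a placeholder origin URL:

```python
# Toy model of origin pull: on a cache miss, the edge fetches the file
# from the origin and keeps a copy, so nothing is ever uploaded by FTP.
import urllib.request

ORIGIN = "https://www.example.com"  # placeholder origin site
cache = {}  # path -> cached bytes


def serve(path):
    if path not in cache:
        # Cache miss: pull the object from the origin on demand.
        with urllib.request.urlopen(ORIGIN + path) as resp:
            cache[path] = resp.read()
    return cache[path]
```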
But what you said that helped me is this: I should have one CNAME for images and another CNAME for content, with the content limited to a folder called "content", so I can put the CSS and JS files in it. That way, the plain HTML pages at the root level will never be affected.
I also realized, while checking the system, that I wasn't using a canonical tag on the intermediate pages, as I was on the story pages. So I just added code to insert canonical tags for all the intermediate pages and the front page.
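To make sure the tags actually show up on every page type, I'm spot-checking with a quick script along these lines (standard library only - the URLs are placeholders, not my real pages):

```python
# Crude spot-check: fetch each page and report the canonical URL found
# in the HTML, or flag the page if no canonical tag is present.
import re
import urllib.request

PAGES = [
    "https://www.example.com/",            # placeholder front page
    "https://www.example.com/section/2/",  # placeholder intermediate page
]

CANONICAL_RE = re.compile(
    r'<link[^>]+rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

for url in PAGES:
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    match = CANONICAL_RE.search(html)
    print(url, "->", match.group(1) if match else "NO CANONICAL TAG")
```

(The regex only matches tags with rel before href, so it's a rough check rather than a proper parse.)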
I do have a few other types of pages, so I will handle the code for them next.
I think adding the canonical tag might fix the problem, but I will also work on reconfiguring the CDN and change over when traffic is not too busy, in case it takes a while to propagate.
-
It sounds like you have set up your CDN slightly wrong.
After setting up a few the same way, I realised that I was actually making a complete duplicate of the site rather than just the images or assets.
I imagine you have the origin directory for the CDN set to the public_html folder.
Create a subdomain and set that as the origin instead.
E.g. I'm working on this site at the moment: http://looksfishy.co.uk/
I have a subdomain called assets: http://assets.looksfishy.co.uk/
The CDN serves that content from: http://cdn.looksfishy.co.uk/
Files uploaded here:
http://assets.looksfishy.co.uk/species/holder/pike.jpg
Displayed here:
http://cdn.looksfishy.co.uk/species/holder/pike.jpg
Check the IP addresses on them.
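If you'd rather check from a script than by hand, something like this does it (Python standard library; the hostnames are the ones from my example above):

```python
# Resolve both hostnames: the assets subdomain should point at your own
# server, while the CDN hostname should point at the CDN's edge network.
import socket

for host in ("assets.looksfishy.co.uk", "cdn.looksfishy.co.uk"):
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror:
        print(host, "-> does not resolve")
```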
It does make uploading images by FTP a bit of a faff, but it does make your site better.