Stop Google indexing CDN pages
-
Just when I thought I'd seen it all, Google hits me with another nasty surprise!
I have a CDN to deliver images, JS and CSS to visitors around the world. As far as I can tell I have no links to static HTML pages on the site, but someone else may have linked to them - perhaps a scraper site?
Google has decided the static copies it was able to access through the CDN have more value than my real pages, and it seems to be slowly replacing my pages in the index with the static copies.
Anyone got an idea on how to stop that?
Obviously I have no access to the static area, because it is on the CDN, so there is no way I know of to put a robots.txt file there.
It could be that I have to trash the CDN configuration and restrict it to the image directory only, and maybe set up a separate CDN subdomain for content that contains only the JS and CSS?
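That said, if the CDN turns out to support a custom robots.txt on its own hostname (some providers do - I'm assuming that here, and the directory paths are just placeholders), something along these lines would block the HTML copies while leaving the assets crawlable:

```
# Hypothetical robots.txt served only at the CDN hostname:
# block everything except the asset directories
User-agent: *
Disallow: /
Allow: /images/
Allow: /css/
Allow: /js/
```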
Have you seen this problem and beat it?
(Of course the next thing is Roger might look at Google results and start crawling them too, LOL)
P.S. The reason I am not asking this question in the Google forums is that others have asked it many times over the past 5 months and nobody at Google has bothered to answer, and nobody who did try gave an answer that was remotely useful. So I'm not really hopeful of anyone here having a solution either, but I expect this is my best bet, because you guys are always willing to try.
-
Thank you, Edward.
I don't have quite that problem, but I think you are right all the same.
My CDN is set up as origin pull. That means there is no need to FTP anything - the system just fetches content from the origin as it is requested. You should check that out if you have to FTP everything.
But the thing you said that helped me is this: I should have had one CNAME for images and another CNAME for content, with the content zone limited to a folder called /content. I can put the CSS files and the JS files in it, and that way the plain HTML pages at the root level will never be affected.
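In DNS terms that split is just two zone-file entries along these lines (the hostnames and CDN targets are placeholders for whatever your provider gives you):

```
; hypothetical zone-file entries - one CNAME per CDN zone
images   IN  CNAME  images.mycdnprovider.net.
content  IN  CNAME  content.mycdnprovider.net.
```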
I also realized, while checking the system, that I wasn't using a canonical tag on the intermediate pages the way I was on the story pages. So I just added code to output canonical tags for all the intermediate pages and the front page.
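For anyone else doing the same, the tag itself is just one line in the head of each page (the URL shown is a placeholder for the page's real www address):

```html
<!-- on every intermediate page: point at the canonical www URL, so any
     CDN copy Google crawls defers back to the real page -->
<link rel="canonical" href="http://www.example.com/category/page.html" />
```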
I do have a few other types of pages, so I will handle the code for them next.
I think adding the canonical tag might fix the problem, but I will also work on reconfiguring the CDN and switch over when traffic is quiet, in case the change takes a while to propagate.
-
It sounds like you have set up your CDN slightly wrong.
After setting up a few the same way, I realised that I was actually making a complete duplicate of the site rather than just serving the images and assets.
I imagine you have the origin directory for the CDN set to your public_html folder.
Create a subdomain and set that as the origin instead.
E.g. I'm working on this site at the moment: http://looksfishy.co.uk/
I have a subdomain called assets: http://assets.looksfishy.co.uk/
The CDN serves the content here: http://cdn.looksfishy.co.uk/
Files uploaded here:
http://assets.looksfishy.co.uk/species/holder/pike.jpg
Displayed here:
http://cdn.looksfishy.co.uk/species/holder/pike.jpg
Check the IP addresses on them.
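A quick way to do that from a terminal (using the hostnames from the example above):

```
# assets.* should resolve to your own web server;
# cdn.* should resolve to the CDN provider's edge network
dig +short assets.looksfishy.co.uk
dig +short cdn.looksfishy.co.uk
```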
It does make uploading images by FTP a bit of a faff, but it does make your site better.