Google indexing site content that I did not wish to be indexed
-
Hi, is it pretty standard for Google to index content that you have not specifically asked them to index, i.e. you have not notified them of a page's existence?
I have just been alerted by 'Mention' about some new content they have discovered. The page is on our site, yes, and maybe I should have set it to NOINDEX, but it only went up a couple of days ago; I made it live so that someone could look at it and see how the page would appear in its final iteration. Normally we go through the usual process of notifying Google via GWMT, adding the page to our sitemap.xml file, publishing it via our G+ stream, and so on.
Reviewing our Analytics, it looks like there has been no traffic to this page yet, and I know for a fact there are no links to it. I am surprised at the speed of the indexation. Is this an example of a brand mention, where an actual link is no longer required?
Cheers
David
-
Thanks Candyman, yes, this is not a question about how to prevent Google from indexing my content; I know that very well. It is more about how quickly they have done it, with the least amount of effort on our part to inform them.
Plus, it is quite an interesting situation you found yourself in; I've never heard of this before.
Many thanks
David
-
Hi David-
We had a similar situation recently where we had a dev site, forgot to noindex it, and it actually started to appear in the SERPs. After a bit of puzzling, it LOOKS like Google found (or at least indexed) the pages as a function of us being logged into our Google accounts while viewing them. We did not do extensive testing on this; it's mostly anecdotal, but it did look like it was true. Maybe we'll do the experiment one day to be sure!
Ken
-
Google is constantly crawling and indexing your website. Why go through the other steps? To ensure that your new page isn't overlooked. While you don't necessarily need to tell Google to index the page in GWT, your sitemap should update automatically, and if it is referenced in your robots.txt file, then the new page will be found without issue.
Now, again, if you don't want a page indexed and it has links, then you need to use noindex/nofollow on the page itself, as robots.txt can be overruled.
-
Hi Samuel,
Thanks for replying, but no, I'm not asking that; I know how to do this. The question is whether this could be seen as an example of page indexation where there has been no explicit activity on my part to inform Google of the content's existence, and there are no links to it, yet Google is still managing to index it. Why bother informing Google via some of the activities mentioned earlier when they will just index it anyway?
Thanks
David
-
Are you asking how to prevent certain pages from appearing in search results? If so, I'd review Moz's guide to robots.
Specifically, I'd recommend the use of both the noindex meta tag and the robots.txt file. Good luck!
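To make that concrete, here is a minimal sketch of both directives. The `/draft-page/` path is just an illustrative placeholder, not a page from this thread. The meta tag goes in the `<head>` of the page you want kept out of the index:

```html
<!-- Page-level directive: keep this page out of the index
     and don't follow its links -->
<meta name="robots" content="noindex, nofollow">
```

And the corresponding robots.txt rule, placed at the site root:

```
User-agent: *
Disallow: /draft-page/
```

One caveat worth keeping in mind, which echoes the point above that robots.txt can be overruled: if robots.txt blocks crawling of a URL, Googlebot never fetches the page and so never sees the noindex tag, and the URL can still appear in results based on external signals. If the goal is to keep a page out of the index entirely, the noindex meta tag (with crawling allowed) is the more reliable of the two.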