Noindex pages still being indexed
-
I'm having a problem where Google is indexing my search results pages even though I have added the "noindex" meta tag. Is the best thing to block the robot from crawling that file using robots.txt?
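For reference, this is the standard form of the tag I added, in the head of each results page (assuming nothing unusual in my setup):

<meta name="robots" content="noindex">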
-
Thanks. That's exactly what I needed.
-
Can you not just use <link rel="canonical" href="/search.php" /> in the head of all pages that serve the results?
Google will then index the original page people search from and ignore pages with query strings like:
- /search.php?seo
- /search.php?ppc
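A minimal sketch of what that might look like in the head of each results page (assuming /search.php is the version you want indexed):

<head>
  ...
  <link rel="canonical" href="/search.php" />
</head>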
-
Just be aware Google may still show the URLs in its search results, but without descriptions. It still 'indexes' the URL but doesn't actually crawl the page.
-
Yep, that should stop them. Now just need time for Google to crawl.
You can also check WMT to see if the robots.txt is being read.
WMT > Health > Blocked URLs
-
I have added "Disallow: /search.php?" to the robots.txt file. Google seemed to be entering details into the search form and then adding the results pages to the index. This means there are hundreds of thousands of pages in the index that I don't want in there. Hopefully stopping the pages from being crawled will help.
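For reference, a minimal robots.txt sketch with that directive; the User-agent: * line is an assumption here and applies the rule to every crawler, not just Google:

User-agent: *
Disallow: /search.php?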
-
robots.txt will block crawling, but a page that only has noindex can usually still be found via search if you look for it specifically.
User-agent: Googlebot
Disallow: /sampledir
Related Questions
-
What would cause a sudden drop in indexed sitemap pages?
I have made no changes to my site for a while, and on 7/14 I had a 20% drop in indexed pages from the sitemap. However, my total indexed page count has stayed the same. What would cause that?
Technical SEO | EcommerceSite
-
Anything new for determining how many of a site's pages are in Google's supplemental index vs the main index?
Since site:mysite.com *** -sljktf stopped working to find pages in the supplemental index several years ago, has anyone found another way to identify content that has been relegated to the supplemental index?
Technical SEO | SEMPassion
-
Google dropping pages from SERPs even though indexed and cached. (Shift over to https suspected.)
Anybody know why pages that have previously been indexed - and that are still present in Google's cache - are now not appearing in Google SERPs? All the usual suspects - noindex, robots, duplication filter, 301s - have been ruled out. We shifted our site over from http to https last week and it appears to have started then, although we have also been playing around with our navigation structure a bit too. Here are a few examples...

Example 1:
- Live URL: https://www.normanrecords.com/records/149002-memory-drawings-there-is-no-perfect-place
- Cached copy: http://webcache.googleusercontent.com/search?q=cache:https://www.normanrecords.com/records/149002-memory-drawings-there-is-no-perfect-place
- SERP (1): https://www.google.co.uk/search?q=memory+drawings+there+is+no+perfect+place
- SERP (2): https://www.google.co.uk/search?q=memory+drawings+there+is+no+perfect+place+site%3Awww.normanrecords.com

Example 2:
- SERP: https://www.google.co.uk/search?q=deaf+center+recount+site%3Awww.normanrecords.com
- Live URL: https://www.normanrecords.com/records/149001-deaf-center-recount-
- Cached copy: http://webcache.googleusercontent.com/search?q=cache:https://www.normanrecords.com/records/149001-deaf-center-recount-

These are pages that have been linked to prominently from our homepage (Moz PA of 68) for days, are present and correct in our sitemap (https://www.normanrecords.com/catalogue_sitemap.xml), have unique content, have decent on-page optimisation, etc. We moved over to https on 11 Aug. There were some initial wobbles (e.g. 301s from normanrecords.com to www.normanrecords.com got caught up in a nasty loop due to the conflicting 301 from http to https) but these were quickly sorted (i.e. spotted and resolved within minutes). There have been some other changes made to the structure of the site (e.g. a reduction in the navigation options) but nothing I know of that would cause pages to drop like this. For the first example (Memory Drawings) we were ranking on the first page right up until this morning and had been receiving Google traffic for it ever since it was added to the site on 4 Aug. Any help very much appreciated! At the very end of my tether / understanding here... Cheers, Nathon
Technical SEO | nathonraine
-
Google Webmaster Tools: Sitemap submitted vs indexed vs Index Status
I'm having an odd error I'm trying to diagnose. Our Index Status is growing and is now up to 1,115. However, when I look at Sitemaps we have 763 submitted but only 134 indexed. The submitted and indexed counts were virtually the same (around 750) until 15 days ago, when the indexed count dipped dramatically. Additionally, when I look under HTML Improvements I only find 3 duplicate pages, and I ran Screaming Frog on the site and got similar results: low duplicates. Our actual content should be around 950 pages, counting all the category pages. What's going on here?
Technical SEO | K-WINTER
-
Should I deindex my pages?
I recently changed the URLs on a website to make them tidier and easier to follow. I put 301s in place to direct all the previous page names to the new ones. However, I didn't read Moz's guide, which says I should leave the old sitemap online for a few weeks afterwards. As a result, Webmaster Tools is showing duplicate page titles (which means duplicate pages) for the old versions of the pages I have renamed. Since the old versions are no longer on the sitemap, Google can no longer access them to find the 301s I have put in place. Is this a problem that will fix itself over time, or is there a way to speed up the process? I could use Webmaster Tools to remove these old URLs, but I'm not sure if this is recommended. Alternatively, I could try to recreate the old sitemap, but this would take a lot of time.
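For illustration, a hypothetical Apache .htaccess sketch of the kind of redirect described (the page names are placeholders, not the site's actual URLs):

# Permanently redirect an old page name to its tidier new equivalent
Redirect 301 /old-page-name.html /new-page-name/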
Technical SEO | maxweb
-
GWT malware notification for meta noindex'ed pages?
I was wondering if GWT will send me a malware notification for pages that are tagged with meta noindex? E.g. I have a site with pages like:
- example.com/indexed/content-1.html
- example.com/indexed/content-2.html
- example.com/indexed/content-3.html
- ...
- example.com/not-indexed/content-1.html
- example.com/not-indexed/content-2.html
- example.com/not-indexed/content-3.html
- ...

All the pages like the ones below are tagged with meta noindex and do not show up in search:
- example.com/not-indexed/content-1.html
- example.com/not-indexed/content-2.html
- example.com/not-indexed/content-3.html

Now one fine day the example.com/not-indexed/content-2.html page on the site gets hacked and starts to serve malware; none of the other pages are affected. Will GWT send me a warning for this? What if the pages are blocked by robots.txt instead of meta noindex?

Regards,
Saijo

UPDATE: hope this helps someone else: https://plus.google.com/u/0/109548904802332365989/posts/4m17sUtPyUS
Technical SEO | Saijo.George
-
Pages to be indexed in Google
Hi, we have 70K posts on our site but Google has scanned 500K pages; the extra pages are category pages and user profile pages. Each category has a page and each user has a page, so with 90K users, Google has indexed 90K user pages alone. My question is: should we leave them as they are, or should we block them from being indexed? We get unwanted landings on these pages and a huge bounce rate. If we need to remove them, what needs to be done: a robots.txt block or noindex/nofollow? Regards
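For reference, the meta tag form of the noindex/nofollow option mentioned above would be something like this, placed in the head of each category or user profile page:

<meta name="robots" content="noindex, nofollow">

Note this only works if the pages stay crawlable; a robots.txt block would stop Google from seeing the tag at all.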
Technical SEO | mtthompsons
-
Backlinks Indexing
Is there a way of getting my backlinks indexed? I have a lot of backlinks but Google can't find them.
Technical SEO | CodePlus