How can I get unimportant pages out of Google?
-
Hi Guys,
I have a (newbie) question. Until recently my robots.txt wasn't written properly, so Google indexed around 1,900 pages of my site, but only 380 of them are real pages; the rest are all /tag/ or /comment/ pages from my blog. I've now set up the sitemap and robots.txt properly, but how can I get the other pages out of Google? Is there a trick, or will it just take a little time for Google to drop the pages?
Thanks!
Ramon
-
If you want to remove an entire directory, you can exclude that directory in robots.txt, then go to Google Webmaster Tools and request a URL removal. You'll have an option to remove an entire directory there.
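For example, a robots.txt that excludes the /tag/ and /comment/ directories mentioned in the question (adjust the paths to match your actual URL structure) might look like this:

```
User-agent: *
Disallow: /tag/
Disallow: /comment/
```

Note that a Disallow rule matches URL paths by prefix, so `/tag/` blocks everything under that directory.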
-
No, sorry. What I meant is: if you mark the folder as disallowed in robots.txt, that will not remove pages that are already indexed.
But with the meta tag, when the spiders crawl the page again and see that it has the noindex tag, they will remove it.
So you should not add the directory to robots.txt before the search engines have removed the pages.
First, put the noindex tag on all the pages you want removed. Removal usually takes anywhere from a week to a month. After they are removed, add the folders you don't want indexed to your robots.txt.
After that, you don't need to worry about the tags.
I say this because if you add the robots.txt rule first, the search engines stop reading those pages, so they would never see the meta noindex tag. That's why you must first get the pages removed with the noindex tag and only then add them to robots.txt.
Hope this has helped.
João Vargas
-
Thanks Vargas. If I choose noindex, I should remove the disallow from robots.txt, right?
I understood that if you have a noindex tag on a page and also a disallow in robots.txt, the SE will keep it indexed, because it can never crawl the page to see the tag. Is that true?
-
To remove the pages you want, you need to put this tag on them:
<meta name="robots" content="noindex">
If you want internal and external links to keep passing relevance through these pages, use instead:
<meta name="robots" content="noindex, follow">
If you also block the folder in robots.txt, you only need the tag on the currently indexed URLs; the search engines won't index the new ones.
In my opinion, I don't like using the Google URL remover, because if someday you want those folders indexed again, you can't. At least, that has happened to me.
The noindex tag works very well for removing unwanted content; within a month or so the pages will be removed.
-
Yes. It's only a secondary-level aid, and not guaranteed, but it could help speed up the process of devaluing those pages in Google's internal system. If the system sees those tags and cross-references them against the robots.txt file, it could help.
-
Thanks guys for your answers...
Alan, do you mean that I should place the noindex tag on all the pages that I want out of Google?
-
I agree with Alan's reply. Try canonical first. If you don't see any change, remove the URLs in GWT.
-
There's no bulk page removal form, so you'd need to submit every URL one at a time, and even then it's not a guaranteed method. You could consider getting a canonical tag onto those specific pages that points to a different URL on your blog, such as an appropriate category page or the blog home page. That could help speed things up, but canonical tags themselves are only "hints" to Google.
Ultimately it's a time and patience thing.
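As an illustration (the URLs here are made up, not the asker's real ones), a canonical tag placed in the `<head>` of a /tag/ page that points to the blog home page would look like this:

```html
<!-- On http://www.example.com/tag/widgets/ -->
<link rel="canonical" href="http://www.example.com/blog/" />
```

Again, rel="canonical" is only a hint, so Google may choose to ignore it.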
-
It will take time, but you can help it along by using the URL removal tool in Google Webmaster Tools: https://www.google.com/webmasters/tools/removals