How can I get unimportant pages out of Google?
-
Hi Guys,
I have a (newbie) question. Until recently I didn't have my robots.txt written properly, so Google indexed around 1,900 pages of my site, but only 380 of them are real pages; the rest are all /tag/ or /comment/ pages from my blog. I have now set up the sitemap and robots.txt properly, but how can I get the other pages out of Google? Is there a trick, or will it just take a little time for Google to drop the pages?
Thanks!
Ramon
-
If you want to remove an entire directory, you can exclude that directory in robots.txt, then go to Google Webmaster Tools and request a URL removal. You'll have an option to remove an entire directory there.
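For example (just a sketch, assuming the unwanted pages live under /tag/ and /comment/ as described in the question), the robots.txt entries would look like this:
User-agent: *
Disallow: /tag/
Disallow: /comment/
Adjust the paths to match however your blog actually builds those URLs before relying on this.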
-
No, sorry. What I meant is: if you mark the folder as disallowed in robots.txt, that will not remove pages that are already indexed.
The meta tag will, though: when the spiders crawl a page again and see the noindex tag on it, they will drop it from the index.
So don't add the directory to robots.txt yet, not before the pages have been removed from the search engines.
First put the noindex tag on all the pages you want removed. It takes anywhere from a week to a month for them to drop out. After they are removed, add the folders you don't want indexed to your robots.txt.
After that, you don't need to worry about the tags.
I say this because if you add the robots.txt rule first, the search engines stop reading those pages at all, so they would never see the meta noindex tag. That is why you must first remove the pages with the noindex tag and only then add the block to robots.txt.
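To be concrete, the tag in question is the standard robots meta tag, placed in the <head> of each page you want removed:
<meta name="robots" content="noindex">
Only after those pages have dropped out of the index should the Disallow rules go into robots.txt.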
Hope this has helped.
João Vargas
-
Thanks Vargas. If I go with noindex, I should remove the disallow from robots.txt first, right?
I understood that if you have a noindex tag on a page and also a disallow for it in robots.txt, the search engine will still index it. Is that true?
-
To remove the pages you want, you need to put this tag on them:
<meta name="robots" content="noindex">
If you want internal links and external relevance to keep passing through those pages, use this instead:
<meta name="robots" content="noindex, follow">
If you keep the block in robots.txt, you only need to add the tag to the current URLs; the search engines will not index new ones.
Personally, I don't like using the Google URL remover, because if someday you want those folders indexed again, they won't be. At least, that has happened to me.
The noindex tag works very well for removing unwanted content; within a month or so the pages will be removed.
-
Yes. It's only a secondary-level aid, and not guaranteed, but it could help speed up the process of devaluing those pages in Google's internal system. If the system sees those tags and cross-references them against the robots.txt file, it could help.
-
Thanks guys for your answers....
Alan, do you mean that I should place the tag below on all the pages that I want out of Google?
-
I agree with Alan's reply. Try canonical 1st. If you don't see any change, remove the URLs in GWT.
-
There's no bulk removal request form, so you'd need to submit every URL one at a time, and even then it's not guaranteed. You could consider getting a canonical tag on those specific pages that points to a different URL on your blog, such as an appropriate category page or the blog home page. That could help speed things up, but canonical tags themselves are only "hints" to Google.
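For example (a rough sketch using a placeholder address; you'd swap in a real category or home page URL from your own blog), the canonical tag in the <head> of one of those /tag/ pages might look like this:
<link rel="canonical" href="http://www.example.com/blog/">
Since it's only a hint, Google can choose to ignore it, so treat it as a nudge rather than a guarantee.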
Ultimately it's a time and patience thing.
-
It will take time, but you can help it along by using the URL removal tool in Google Webmaster Tools: https://www.google.com/webmasters/tools/removals