How can I get unimportant pages out of Google?
-
Hi Guys,
I have a (newbie) question. Until recently I didn't have my robots.txt set up properly, so Google indexed around 1,900 pages of my site, but only 380 of them are real pages; the rest are all /tag/ or /comment/ pages from my blog. I have now set up the sitemap and robots.txt properly, but how can I get the other pages out of Google? Is there a trick, or will it just take a little time for Google to drop the pages?
Thanks!
Ramon
-
If you want to remove an entire directory, you can exclude that directory in robots.txt, then go to Google Webmaster Tools and request a URL removal. You'll have an option to remove an entire directory there.
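As a rough sketch — assuming the unwanted pages live under the /tag/ and /comment/ paths mentioned in the question — the robots.txt entries would look like this:

User-agent: *
Disallow: /tag/
Disallow: /comment/

Keep in mind that Disallow only stops crawling; it's the directory removal request in Webmaster Tools that gets the already-indexed URLs dropped.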
-
No, sorry. What I meant is: if you mark the folder as disallowed in robots.txt, that will not remove the pages that are already indexed.
But the meta tag will: when the spiders visit a page again and see that it has the noindex tag, they will remove it.
So don't add the directory to robots.txt yet — not before the search engines have removed the pages.
First, put the noindex tag on all the pages you want removed. Removal usually takes a week to a month. After they are gone, add the folders you don't want indexed to your robots.txt.
After that, you don't need to worry about the tags.
I say this because if you add the block to robots.txt first, the search engines stop reading the pages at all, so they would never see the meta noindex tag. Therefore you must first remove the pages with the noindex tag, and only then add the block to robots.txt.
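To make the order concrete, here is a minimal sketch. Step one, put this in the head section of every page you want removed:

<meta name="robots" content="noindex">

Step two, only after those pages have dropped out of the index, add the Disallow rules (like the ones shown earlier in this thread) to robots.txt.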
Hope this has helped.
João Vargas
-
Thanks Vargas. If I go with noindex, I should remove the disallow from robots.txt, right?
I understood that if a page has a noindex tag and is also disallowed in robots.txt, the search engine will keep it indexed (since it can never crawl the page and see the tag) — is that true?
-
To remove the pages you want, you need to add this tag:
<meta name="robots" content="noindex">
If you want internal and external links to keep passing relevance through those pages, use:
<meta name="robots" content="noindex, follow">
If you also set up the block in robots.txt, you only need to add the tag to the current URLs; search engines won't index new ones.
Personally, I don't like using the Google URL remover, because if someday you want those folders indexed again, you won't be able to — at least that has happened to me.
The noindex tag works very well for removing unwanted content; within a month or so the pages will be removed.
-
Yes. It's only a secondary aid, and not guaranteed, but it could help speed up the process of devaluing those pages in Google's internal system. If the system sees those tags and cross-references them against the robots.txt file, it could help.
-
Thanks, guys, for your answers...
Alan, do you mean that I should place the tag below on all the pages that I want out of Google?
-
I agree with Alan's reply. Try canonical 1st. If you don't see any change, remove the URLs in GWT.
-
There's no bulk page request form, so you'd need to submit every URL one at a time, and even then it's not guaranteed. You could consider getting a canonical tag on those specific pages that points to a different URL on your blog, such as an appropriate category page or the blog home page. That could help speed things up, but canonical tags themselves are only "hints" to Google.
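As a rough sketch — the URL below is just a hypothetical placeholder for whichever page you want to consolidate to — each tag or comment page would carry something like this in its head section:

<link rel="canonical" href="http://www.yoursite.com/blog/category-name/">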
Ultimately it's a time and patience thing.
-
It will take time, but you can help it along by using the URL removal tool in Google Webmaster Tools: https://www.google.com/webmasters/tools/removals