How do I remove my site's pages from search results?
-
I have tested hundreds of pages to see if Google will properly crawl, index, and cache them. Now I want these pages removed from Google search, except for the homepage. What should the rule in robots.txt be?
I use the rule below, but I am not sure whether Google will actually remove those hundreds of test pages.
User-agent: *
Disallow: /
Allow: /$
-
Why not just 404/410 those pages?
-
Hi Matt! I've already tried your suggestion. I'll let you know the result. Thanks a lot, man!
-
Why don't you try adding a meta robots tag with "NOINDEX" on those pages?
I would also remove the URLs with the Remove URLs tool in WMT.
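For reference, the noindex approach looks like this on each test page (a generic snippet, not tied to any particular CMS):

```html
<!-- Place in the <head> of each test page. The page must remain
     crawlable (i.e. NOT blocked in robots.txt) for Googlebot to
     ever see this tag. -->
<meta name="robots" content="noindex, nofollow">
```

Note the catch: if you also block these pages in robots.txt, Googlebot can't crawl them and will never see the noindex tag, so the URLs can linger in the index. Pick one mechanism or the other.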
-
These are just test pages, and I need them to be private and not visible in Google once I'm done testing. I understand that there will be a drop in SERP rankings.
-
I would do
User-agent: *
Disallow: /?
Allow: /

But test it in WMT first to be safe. However, you must be sure that this is the route you want to go down. Robots.txt will prevent all of those pages from being crawled, which means that none of their content will count (blocked URLs can still appear in the index if other sites link to them). Any links to these pages may also be devalued. The result is a potential drop in SERPs.
What is the reason why you don't want them appearing? That way we may find an alternative solution.
-
This is basically a duplicate of your other thread, where I gave you that code. Yes, it should block the other pages. Put it in, fetch in WMT, and you should be right.
You can also test it in WMT before you implement it. I tried it on my end and it works.
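If you want to sanity-check a rule like this offline before pushing it live, here's a minimal sketch of Google's documented matching logic (longest matching pattern wins; on a tie, allow wins). This is my own simplified helper, not Google's actual parser, so treat WMT's tester as the authority:

```python
import re

def pattern_to_regex(pattern):
    # Translate a robots.txt path pattern to a regex:
    # '*' matches any run of characters, a trailing '$' anchors the end.
    parts = []
    for i, ch in enumerate(pattern):
        if ch == "*":
            parts.append(".*")
        elif ch == "$" and i == len(pattern) - 1:
            parts.append("$")
        else:
            parts.append(re.escape(ch))
    return "^" + "".join(parts)

def is_allowed(path, rules):
    """rules: list of (directive, pattern) tuples, e.g. ("allow", "/$").
    Longest matching pattern wins; on a length tie, allow beats disallow."""
    matches = [(len(pat), directive == "allow")
               for directive, pat in rules
               if re.match(pattern_to_regex(pat), path)]
    if not matches:
        return True  # no rule matches -> crawling is allowed by default
    return max(matches)[1]

rules = [("disallow", "/"), ("allow", "/$")]
print(is_allowed("/", rules))              # homepage stays crawlable
print(is_allowed("/test-page-123", rules)) # interior test pages blocked
```

Running this, the `Allow: /$` rule (length 2) outranks `Disallow: /` (length 1) for the bare homepage path, while every longer path only matches the disallow rule, which is exactly the behavior the question is after.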