Which is the better solution for removing URLs from search: a meta noindex tag or a 410 HTTP status code?
-
Thanks in advance!
-
Do three things:
a) Add the meta noindex tag.
b) Add the search section you don't want crawled to your robots.txt file.
c) Request URL removal in Webmaster Tools.
Doing the first two steps will ensure that the pages remain unindexed after you remove them in WMT.
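A minimal sketch of steps (a) and (b), assuming the internal search pages live under a hypothetical /search/ path:

```html
<!-- (a) in the <head> of each internal search results page -->
<meta name="robots" content="noindex">
```

```
# (b) in robots.txt at the site root
User-agent: *
Disallow: /search/
```

One caveat: once the Disallow rule is live, Googlebot can no longer crawl those pages to see the noindex tag, so a common approach is to add the meta tag first and only add the robots.txt block after the pages have dropped out of the index.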
-
Thanks Pixelbypixel!
-
It depends on how quickly you want it removed. Noindex is normally best, but you can also ask Google to remove the URL from its index if it's more urgent: https://support.google.com/websearch/troubleshooter/3111061?rd=1
Related Questions
-
After hack and remediation, thousands of URLs still appearing as 'Valid' in Google Search Console. How to remedy?
I'm working on a site that was hacked in March 2019; in the process, nearly 900,000 spam links were generated and indexed. After remediation of the hack in April 2019, the spammy URLs began dropping out of the index until last week, when Search Console showed around 8,000 as "Indexed, not submitted in sitemap" yet listed as "Valid" in the coverage report. Many of them are still hack-related URLs reported as indexed in March 2019, despite the fact that clicking on them leads to a 404. As of this Saturday, the number jumped to 18,000, but the Search Console reports give me no way to find out why the jump happened or which new URLs were added; the only sort mechanism is last crawled, and they don't show up there. How long can I expect it to take for these remaining URLs to also be removed from the index? Is there any way to expedite the process? I've submitted a 'new' sitemap several times, which (so far) has not helped. Is there any way to see, inside the new GSC view, why or how the number of valid URLs in the index doubled over one weekend?
Intermediate & Advanced SEO | rickyporco
In Search Console, why is the XML sitemap "issue" count 5x higher than the URL submission count?
Google Search Console is telling us that there are 5,193 sitemap "issues": URLs that are present in the XML sitemap but blocked by robots.txt. However, only 1,222 total URLs were submitted in the XML sitemap, and I found only 83 instances of URLs that fit their example description. Why is the number of "issues" so high? Does it compound over time as Google re-crawls the sitemap?
Intermediate & Advanced SEO | FPD_NYC
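One way to audit the conflict described above is to cross-check the sitemap URLs against your robots.txt rules yourself. A sketch using Python's standard urllib.robotparser; the rules and URLs below are made up for illustration, so substitute your own:

```python
from urllib import robotparser

# Hypothetical robots.txt rules and sitemap URLs -- replace with your own.
ROBOTS_TXT = """\
User-agent: *
Disallow: /search/
Disallow: /private/
"""

SITEMAP_URLS = [
    "https://example.com/products/blue-widget",
    "https://example.com/search/blue-widget",
    "https://example.com/private/report.pdf",
]

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# URLs present in the sitemap but disallowed by robots.txt -- the
# conflict Search Console reports as a sitemap "issue".
blocked = [url for url in SITEMAP_URLS
           if not parser.can_fetch("Googlebot", url)]
print(blocked)
```

Comparing your own count of blocked URLs against the GSC number can show whether the report is counting each URL once or accumulating occurrences across re-crawls.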
Page URL keywords
Hello everybody, I've read that it's important to put your keywords at the front of your page title, meta tags, etc., but my question is about the page URL. Say my target keywords are exotic, soap, natural, and organic. Will placing the keywords further back in the URL affect SEO ranking? If so, what are the first n words Google considers? For example, www.splendidshop.com/gift-set-organic-soap vs. www.splendidshop.com/organic-soap-gift-set. Will the first be any less effective than the second one simply because the keywords are placed toward the end?
Intermediate & Advanced SEO | ReferralCandy
To index search results or not?
In its webmaster guidelines, Google says not to index search results "that don't add much value for users coming from search engines." I've noticed several big brands index search results, and am wondering if it is generally OK to index search results with high engagement metrics (high PVPV, time on site, etc.). We have a database of content, and it seems one of the best ways to get this content into search engines would be to allow indexing of search results (to capture the long tail) rather than build thousands of static URLs. Have any smaller brands had success with allowing indexing of search results? Any best practices or recommendations?
Intermediate & Advanced SEO | nicole.healthline
Best practice for removing indexed internal search pages from Google?
Hi Mozzers, I know that it's best practice to block Google from indexing internal search pages, but what's best practice when "the damage is done"? I have a project where a substantial part of our visitors and income lands on an internal search page, because Google has indexed them (about 3%). I would like to block Google from indexing the search pages via the meta noindex,follow tag because:
- Google's guidelines say: "Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines." http://support.google.com/webmasters/bin/answer.py?hl=en&answer=35769
- They make for a bad user experience.
- The search pages are (probably) stealing rankings from our real landing pages.
- We got the Webmaster notification "Googlebot found an extremely high number of URLs on your site" with links to our internal search results.
I want to use the meta tag to keep the link juice flowing. Do you recommend using robots.txt instead? If yes, why? Should we just go dark on the internal search pages, or how shall we proceed with blocking them? I'm looking forward to your answer! Edit: Google has currently indexed several million of our internal search pages.
Intermediate & Advanced SEO | HrThomsen
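For reference, the noindex,follow directive discussed above would go in each search page's <head>, or, equivalently for crawlable non-HTML responses, in an HTTP response header:

```html
<meta name="robots" content="noindex,follow">
```

```
X-Robots-Tag: noindex
```

Unlike a robots.txt Disallow, these directives only take effect if Googlebot can still crawl the pages, which is the reason to keep already-indexed pages out of robots.txt until they have been deindexed.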
Canonical Meta Tag Best Practices
I've noticed that some website owners use canonical tags even when there may be no duplicate issues. For example, www.examplesite.com has a canonical tag of rel="canonical" href="http://www.examplesite.com/" />, and www.examplesite.com/bluewidget has a canonical tag of rel="canonical" href="http://www.examplesite.com/bluewidget/" />. Is it recommended or helpful to do this?
Intermediate & Advanced SEO | webestate
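For clarity, the element described above is a self-referencing canonical placed in the page's own <head>; for www.examplesite.com/bluewidget it would read:

```html
<link rel="canonical" href="http://www.examplesite.com/bluewidget/" />
```

A self-referencing canonical is generally considered harmless and mildly helpful, since it consolidates signals when the same page is reachable through URL variants such as tracking parameters or mixed-case paths.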
How to Hide Directories in Search?
I noticed bad 404 error links in Google Webmaster Tools, and they were pointing to directories that do not have an actual page but hold information. For example, there are links pointing to our PDF folder, which holds all of our PDF documents. If I type in example.com/pdf/, it brings up an unformatted web page that displays all of our PDF links. How do I prevent this from happening? Right now I am blocking these in my robots.txt file, but if I type them in, they still appear. Or should I not worry about this?
Intermediate & Advanced SEO | hfranz
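One way to stop the directory listing itself, assuming an Apache server (the directives differ on nginx or IIS), is in the site config or .htaccess:

```apacheconf
# Disable automatic directory listings such as example.com/pdf/
Options -Indexes

# Optionally keep the PDF files themselves out of the index
# (requires mod_headers)
<FilesMatch "\.pdf$">
    Header set X-Robots-Tag "noindex"
</FilesMatch>
```

Note that a robots.txt block only discourages crawling; it does not stop anyone from typing the URL directly, which is why the listing still appears in a browser.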
Can I add a NOFOLLOW or NOINDEX attribute for better organic ranking?
I am working on an online retail store dedicated to patio umbrellas. My website is on the 2nd page of Google web search for the "patio umbrellas" keyword. I have another internal page with a "Patio Umbrellas" text link: http://www.vistapatioumbrellas.com/21/patio-umbrellas.html. I assume that Google is confused about which of my pages to rank for "patio umbrellas". I want to set a NOFOLLOW attribute or a NOINDEX,FOLLOW meta tag on this internal page. Will that help me rank higher for "patio umbrellas"? My ultimate goal is to reduce the confusion around this keyword.
Intermediate & Advanced SEO | CommercePundit