Disallow a spammed sub-page via robots.txt
-
Hi,
I have a sub-page on my website with a lot of spam links pointing to it. I was wondering whether Google will ignore those spam links if I hide the page using robots.txt.
Will that get the page off Google's radar, or is it useless?
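For context, the kind of rule I had in mind is a sketch like this in robots.txt (the /spammed-page/ path is just a stand-in for the real URL):
    # Block all crawlers from the spammed sub-page (placeholder path)
    User-agent: *
    Disallow: /spammed-page/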
-
Does it rank for anything worthwhile?
Does it have any legitimate or valuable links pointing to it?
If the answer is no to both of those questions, just delete the page, recreate it at a new URL, and request removal of the old URL from Google's index (and obviously don't 301 redirect it).
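If the site happens to run on Apache, one rough sketch of serving the deleted URL as gone is a mod_alias rule in .htaccess (the path below is a placeholder):
    # Return 410 Gone for the deleted spammed URL
    Redirect gone /old-spammed-page/
A 410 (or even a plain 404) alongside the removal request keeps the old URL out of the index without passing anything along the way a 301 would.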
-
Hi, my personal opinion is that if the links were unintentional or not built by you, Google will ignore them and not penalise the site (see Rand's Whiteboard Friday video on negative SEO).
However, if the page is not very important to you, consider removing it from Google's index (use Google Webmaster Tools for this) and then getting Google to index a new page that has no spam links pointing to it.
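If you can still edit the page while the removal request goes through, a robots noindex meta tag in its <head> is a simple way to back that up; a minimal sketch:
    <!-- Tell search engines not to index the old spammed page -->
    <meta name="robots" content="noindex">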
-
Related Questions
-
Should I optimize my homepage or a sub-page for my most important keyword?
Quick question: when choosing the most important keyword set that I would like to rank for, would I be better off optimizing my homepage or a sub-page for this keyword? My thinking goes as follows: the homepage (i.e. www.mysite.com) naturally has more backlinks and thus a better Google PageRank. However, there are certain things I could do to a subpage (i.e. www.mysite.com/green-widgets-los-angeles) that I wouldn't want to do to the homepage, which might be more "optimal" overall. Option C, I suppose, would be to optimize both the homepage and a single sub-page, which seems like a pretty good solution, but I have been told that having multiple pages optimized for the same keywords might "confuse" search engines. Would love any insight on this!
On-Page Optimization | Jacob_A2
-
Unique Pages with Thin Content vs. One Page with Lots of Content
Is there anyone who can give me a definitive answer on which of the following situations is preferable from an SEO standpoint for the services section of a website? 1. Many unique and targeted service pages with the primary keyword in the URL, title tag and H1, but with the tradeoff of having thin content on the page (i.e. 100 words of content or less). 2. One large service page listing all services in the content; the primary keyword for the URL, title tag and H1 would be something like "(company name) services" and each service would be in an H2 heading. In this case, there is lots of content on the page. Yes, the ideal situation would be to beef up the content for each unique page, but we have found that this isn't always an option based on the amount of time a client has dedicated to a project.
On-Page Optimization | RCDesign741
-
Duplicate page content
Hi, crawl errors are showing two pages of duplicate content on my client's WordPress site: /news/ and /category/featured/. Yoast is installed, so how is this best resolved? I see that both pages are canonicalised to themselves, so I presume I just need to change the canonical tag on /category/featured/ to reference /news/ (since /news/ is the page with higher authority and the main page for showing this info)? Or is there another way in Yoast or WordPress to deal with this and prevent it from happening again? Cheers, Dan
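P.S. To be concrete, the tag I'd be putting in the <head> of /category/featured/ would look roughly like this (example.com standing in for the client's domain):
    <!-- On /category/featured/: point the canonical at the main news page -->
    <link rel="canonical" href="http://example.com/news/" />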
On-Page Optimization | Dan-Lawrence0
-
Duplicate Content on Event Pages
My client has a pretty popular event listings service and, in the hope of gathering more events, they opened up the platform to allow users to add events. This works really well for them and they are able to garner a lot more events this way. The major problem I'm finding is that many event coordinators and site owners take the copy from their own website and paste it in, duplicating a lot of the content. We have editor picks that contain a lot of unique content, but the duplicate content scares me. It hasn't hurt our rankings (we have a PageRank of 7), but I'm wondering if this is something we should address. We don't have the manpower to eliminate all the duplication, but if we cut it down, would we see a significant advantage over people posting the same event?
On-Page Optimization | mattdinbrooklyn0
-
PageRank drop from 4 to ?
Our site (ecommerce) has been around since 1998. The PageRank has gone from a 4 to a "?". The Moz score is still good, but traffic is way down. We never got a warning from Google and were never part of a black-hat linking scheme, so I'm puzzled. There are some duplicate content issues and missing meta tags, but they were on customer login pages that we have since blocked. Thanks
On-Page Optimization | rglaubinger0
-
On-Page reports are empty
Hello, yesterday I created my PRO account. I have several URLs in the top 50, yet I have no reports in On-Page Reports. How long does the system take to generate these? Thank you, Carlos
On-Page Optimization | cahams0
-
Custom Landing Page URLs
I will begin creating custom landing pages optimized for long-tail keywords. Placing the keywords in the URL is obviously important. Question: would it be detrimental to rankings to have extra characters extending past the keyword? I'm not able to use tracking code, but I need to put an identifier in the URL (clp = custom landing page). For example, is "www.domain.com/silver-fish.html" going to perform meaningfully better than "www.domain.com/silver-fish-clp.html" for the keyword phrase "silver fish"? There will obviously be a lot of on-page optimization in addition to just structuring the URLs. Thank you. SIMbiz
On-Page Optimization | SIMbiz0
-
Does Google respect User-agent rules in robots.txt?
We want to use an inline linking tool (LinkSmart) to cross-link between a few key content types on our online news site. LinkSmart uses a bot to establish the linking. The issue: there are millions of pages on our site that we don't want LinkSmart to spider and process for cross-linking. LinkSmart suggested setting a noindex tag on the pages we don't want them to process, and that we target the rule to their specific user agent. I have concerns. We don't want to inadvertently block search engine access to those millions of pages. I've seen Googlebot ignore nofollow rules set at the page level. Does it ever obey rules that are aimed at another user agent and that it has been directed to ignore? Can you quantify the level of risk in setting user-agent-specific nofollow tags on pages we want search engines to crawl, but that we want LinkSmart to ignore?
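To make it concrete, the kind of robots.txt rule we're weighing looks roughly like this (the LinkSmartBot token and the /archive/ path are placeholders, not their documented values):
    # Keep only the LinkSmart crawler out of the sections it shouldn't process
    User-agent: LinkSmartBot
    Disallow: /archive/

    # All other crawlers, including Googlebot, remain unrestricted
    User-agent: *
    Disallow: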
On-Page Optimization | lzhao0