Latest posts made by kurus
-
RE: Removing URLs in bulk when directory exclusion isn't an option?
There are no links to these pages, so no juice. There are also no 'new' replacement pages. We just want them out of the index ASAP by any means necessary.
-
RE: Removing URLs in bulk when directory exclusion isn't an option?
Thanks, but the goal is to expedite the removal process via the URL removal tool. We've already 404'd the pages, so they'll be removed from the index. It's a question of timing, since the pages in question are low quality and hurting us in the context of Panda.
-
RE: Googlebot HTTP 204 Status Code Handling?
Thanks for your input.
-
Removing URLs in bulk when directory exclusion isn't an option?
I had a bunch of URLs on my site that followed the form:
http://www.example.com/abcdefg?q=&site_id=0000000048zfkf&l=
There were several million pages, each associated with a different site_id. They weren't very useful, so we've removed them entirely and they now return a 404.
The problem is, they're still stuck in Google's index. I'd like to remove them manually, but how? There's no proper directory (i.e. /abcdefg/) to remove, since there's no trailing /, and removing them one by one isn't an option. Is there any other way to approach the problem or specify URLs in bulk?
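For reference, here is a minimal Python sketch for spot-checking that a sample of these URLs really does return a 404 (or a 410) before filing removal requests. The extra site_id values in it are made-up placeholders, not real ones.

# Minimal sketch, not part of the original question: spot-check that the retired
# site_id URLs really return 404/410. The extra site_id values are placeholders.
import requests

SAMPLE_SITE_IDS = ["0000000048zfkf", "0000000051aaaa", "0000000077bbbb"]

for site_id in SAMPLE_SITE_IDS:
    url = f"http://www.example.com/abcdefg?q=&site_id={site_id}&l="
    resp = requests.head(url, allow_redirects=False, timeout=10)
    # A 404 will eventually drop out of the index; a 410 (Gone) is sometimes
    # treated as a slightly stronger/faster removal signal.
    print(resp.status_code, url)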
Any insights are much appreciated.
Kurus
-
Googlebot HTTP 204 Status Code Handling?
If a user runs a search that returns no results and the server responds with a 204 (No Content), will Googlebot treat that as the rough equivalent of a 404 or a noindex? If not, it seems one would want to noindex the page to avoid low-quality penalties, but that might require more back and forth with the server, which isn't ideal.
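For illustration only, a small Flask sketch of the noindex alternative mentioned above; the framework choice, route, and in-memory "index" are assumptions made for the example, not anything prescribed by Google or Moz.

# Hypothetical sketch: instead of answering an empty search with 204 (No Content),
# serve a small 200 page carrying an explicit noindex hint (or return a 404).
from flask import Flask, request

app = Flask(__name__)

FAKE_INDEX = {"seo": ["result 1", "result 2"]}  # stand-in for a real search backend

@app.route("/search")
def search():
    query = request.args.get("q", "")
    results = FAKE_INDEX.get(query, [])
    if not results:
        # A 204 has no body, so there is nowhere to put a robots meta tag.
        html = ('<html><head><meta name="robots" content="noindex"></head>'
                '<body>No results found.</body></html>')
        return html, 200
    return "<br>".join(results), 200

if __name__ == "__main__":
    app.run()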
Kurus
-
RE: Robots.txt: Link Juice vs. Crawl Budget vs. Content 'Depth'
Baptiste,
Thanks for the feedback. Can you clarify what you mean by the following?
"On a side note, if you were impacted by Panda, I would strongly suggest to remove / disallow the empty pages on your site. This will give you more crawl budget for interesting content."
-
Robots.txt: Link Juice vs. Crawl Budget vs. Content 'Depth'
I run a quality vertical search engine. About 6 months ago we had a problem with our sitemaps, which resulted in most of our pages getting tossed out of Google's index. As part of the response, we put a bunch of robots.txt restrictions in place on our search result pages to prevent Google from crawling through pagination links and other parameter-based variants of our results (sort order, etc.). The idea was to 'preserve crawl budget' in order to speed the rate at which Google could get our millions of pages back in the index by focusing attention/resources on the right pages.
The pages are back in the index now (and have been for a while), and the restrictions have stayed in place since that time. But, in doing a little SEOMoz reading this morning, I came to wonder whether that approach may now be harming us...
http://www.seomoz.org/blog/restricting-robot-access-for-improved-seo
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
Specifically, I'm concerned that a) we're blocking the flow of link juice and that b) by preventing Google from crawling the full depth of our search results (i.e. pages >1), we may be making our site wrongfully look 'thin'. With respect to b), we've been hit by Panda and have been implementing plenty of changes to improve engagement, eliminate inadvertently low quality pages, etc., but we have yet to find 'the fix'...
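For reference, one way to see exactly which result-page variants the current rules still block for Googlebot is a quick check with Python's standard-library robots.txt parser. A minimal sketch follows, in which the example.com host, paths, and parameters are placeholders rather than the site's real URLs.

# Rough sketch: report which search-result variants the live robots.txt blocks
# for Googlebot. Host, paths, and parameters below are made-up placeholders.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
rp.read()

test_urls = [
    "http://www.example.com/results?q=widgets",             # first results page
    "http://www.example.com/results?q=widgets&page=2",      # pagination variant
    "http://www.example.com/results?q=widgets&sort=price",  # sort-order variant
]

for url in test_urls:
    print("allowed" if rp.can_fetch("Googlebot", url) else "blocked", url)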
Thoughts?
Kurus
-
RE: Did Google's Farmer Update Positively/Negatively Affect Your Search Traffic?
Has anyone seen a recovery after taking remedial action?