A few misc Webmaster Tools & robots.txt questions
-
Hi
I have a few general misc questions re robots.txt & GWT:
1) In the robots.txt file, what do the lines below block? Internal search?
Disallow: /?
Disallow: /*?
2) Also, the site's feeds are blocked in robots.txt. Why would you want to block a site's feeds?
3) What's the best way to deal with the below:
- an old removed page that's returning a 500 response code?
- a soft 404 for an old removed page that has no current replacement
- old removed pages returning a 404
The old pages didn't have any authority or inbound links, so is it best/OK to simply create a URL removal request in GWT?
Cheers
Dan
-
Many thanks, Stufroguk!
-
It depends on whether Google has indexed these 'empty' pages. You need to check. Remember that every page is also given page authority. Best practice is to redirect them before removing them. You can get Google to fetch the pages in GWT so that the crawlers follow the redirect, then remove them.
-
Your old pages: fetch them in GWT, then remove them if you already have the 301s set up. Once Google has indexed the new pages, you know the link juice has passed and you can remove them.
The robots.txt blocking is used as a backup.
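Before fetching or removing anything in GWT, it's worth verifying that each old URL really does return a 301 to the intended target. A minimal sketch using Python's requests library; the URLs in the mapping are hypothetical placeholders for your own pages:

```python
import requests

# Hypothetical mapping of old URLs to their 301 targets; swap in your own.
redirect_map = {
    "https://www.example.com/old-page/": "https://www.example.com/new-page/",
    "https://www.example.com/old-overview/": "https://www.example.com/",
}

for old_url, expected in redirect_map.items():
    # HEAD with allow_redirects=False shows the redirect response itself,
    # not the page it eventually lands on.
    resp = requests.head(old_url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    if resp.status_code == 301 and location == expected:
        print(f"OK   {old_url} -> {location}")
    else:
        print(f"FIX  {old_url}: {resp.status_code} -> {location or 'no Location header'}")
```

Anything flagged FIX is either not redirecting at all (your 500/404 cases) or pointing at the wrong target, and is worth sorting out before you request removal.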
-
Thanks Stufroguk,
1) Does this still apply if the pages had no content? They were just overview pages/folders without any copy, links or authority, which is why I think it's OK to just remove the URLs without 301'ing.
2) I do have other old content pages that I have 301'd to new replacements, but I hadn't planned to do anything else with them. Are you saying that after two weeks I should nofollow or block them? Won't that stop the link equity passing?
Cheers
Dan
-
To manage old pages, it's best practice to simply 301 redirect them, leave them for a couple of weeks, then tag them with nofollow and/or block them with robots.txt (see the sketch below). That way you've passed on the link equity. Then you can remove them from GWT.
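For the blocking step, a minimal robots.txt sketch; the paths are hypothetical placeholders for your own removed pages:
User-agent: *
Disallow: /old-page/
Disallow: /old-overview/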
In answer to 1: yes. But not all search engines read the "*" wildcard in file paths, so you might need to tinker with this a bit; the sketch below shows how Googlebot interprets those two patterns.
Use this to help: http://tool.motoricerca.info/robots-checker.phtml
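To make the matching concrete, here's a small Python sketch of how Googlebot-style pattern matching treats the two Disallow lines from the original question. This is an illustration of the matching logic, not Google's actual parser, and the sample URLs are hypothetical:

```python
import re

def googlebot_match(pattern: str, path: str) -> bool:
    """Illustrative only: translate a robots.txt Disallow pattern the way
    Googlebot interprets it. '*' matches any run of characters, '$'
    anchors the end, and rules are otherwise prefix matches."""
    regex = ""
    for ch in pattern:
        if ch == "*":
            regex += ".*"
        elif ch == "$":
            regex += "$"
        else:
            regex += re.escape(ch)
    return re.match(regex, path) is not None

# Hypothetical sample URLs (path + query string, as a crawler sees them)
samples = ["/?s=widgets", "/category?page=2", "/category/page-2", "/feed/"]
for rule in ("/?", "/*?"):
    blocked = [u for u in samples if googlebot_match(rule, u)]
    print(f"Disallow: {rule!r} blocks {blocked}")
```

Run against those samples, Disallow: /? only blocks query-string URLs directly off the root (typical internal search URLs), while Disallow: /*? blocks any URL containing a question mark. So yes, both lines are aimed at internal search and other parameterised URLs.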
Related Questions
-
Disallow wildcard match in Robots.txt
This is in my robots.txt file. Does anyone know what it is supposed to accomplish? It doesn't appear to be blocking URLs with question marks:
Disallow: /?crawler=1
Disallow: /?mobile=1
Thank you
Technical SEO | AmandaBridge
-
301 redirect question
Hi everyone. When doing 301 redirects for a large site, if a page has 0 inbound links would you still redirect it, or just leave it? I'm just curious about the best practice for this. Thanks in advance.
Technical SEO | TheZenAgency
-
Dealing with 410 Errors in Google Webmaster Tools
Hey there! (Background) We are doing a content audit on a site with 1,000s of articles, some going back to the early 2000s. There is some content that was duplicated from other sites, does not have any external links to it, and gets little or no traffic. As we weed these out, we set them to 410 to let the Goog know that this is not an error; we are getting rid of them on purpose, and so the Goog should too. As expected, we now see the 410 errors in the Crawl report in Google Webmaster Tools. (Question) I have been going through and "Marking as Fixed" in GWT to clear these pages out of my console, but I am wondering if it would be better to just ignore them and let them clear out of GWT on their own. They are "fixed" in the 410 way I intended, and I am betting Google means fixed as in they show a 200 (if that makes sense). Any opinions on the best way to handle this? Thx!
Technical SEO | CleverPhD
-
Robots.txt: crawlers visiting URLs we don't want them to
Hello. We run a number of websites, and underneath them we have testing websites (sub-domains); on those sites we have robots.txt disallowing everything. When I logged into MOZ this morning I could see the MOZ spider had crawled our test sites even though we have said not to. Does anyone have ideas on how we can stop this happening?
Technical SEO | ShearingsGroup
-
Sitemap Generator Tool
We have developed a very large domain with well over 500 pages that need to be indexed. The tool we usually use to create a sitemap has a limit of 500 pages. Does anyone know of a good tool we can use to create a sitemap (text and XML) that doesn't have a page limit? Thanks!
Technical SEO | TracSoft
-
An Easy Question - Backlinks
Hi guys, I know this is an easy question and I'm already quite sure of the answer, but it would be good to get some other views. This website - http://www.collapso.net/ - has 261,923 backlinks to our website according to Ahrefs. They have 1000's of pages like this - http://www.collapso.net/countiesnew/Cork.html - which link to our site. 43.95% of the backlinks to our site are from these guys, but we've been fortunate enough to never receive any warnings via WMT or experience drop-offs in traffic. My question is: do we have this site remove all the links to our site, or leave them alone? Given there's such a large quantity of links, I'm not exactly sure what the impact would be on us. My instinct says get rid of them, although part of me questions what such a massive drop in our link profile would look like to Google.
Technical SEO | MarkScully
-
Strange Top URLs for Keywords in Google Webmaster Tools
When we click on one of our keywords under the keywords section of Google Webmaster Tools, it shows our top URLs for that keyword. The problem is that it is giving us some very strange URLs that we have searched high and low to try to find, but we don't know where they came from. Here is a screenshot: http://bit.ly/pl6mB3 Do you know where this type of URL string could have originated and how to fix it?
Technical SEO | Hakkasan
-
How do I use the Robots.txt "disallow" command properly for folders I don't want indexed?
Today's sitemap webinar made me think about the disallow feature; it seems the opposite of sitemaps, but it also seems both are kind of ignored in varying ways by the engines. I don't need help semantically, I got that part. I just can't seem to find a contemporary answer about what should be blocked using the robots.txt file. For example, I have folders containing site comps for clients that I really don't want showing up in the SERPs. Is it better to not have these folders on the domain at all? There are also security issues I've heard of that make sense: simply look at a site's robots file to see what it is hiding. It makes it easier to hunt for files when you know the directory the files are contained in. Do I concern myself with this? Another example is a folder I have for my XML sitemap generator. I imagine Google isn't going to try to index this or count it as content, so do I need to add folders like this to the disallow list?
Technical SEO | SpringMountain