Use of Robots.txt file on a job site
-
We are performing SEO on a large niche job board. My question revolves around the idea of nofollowing all of the actual job postings from their clients, since they only last for 30 to 60 days. Does anybody have an idea of the best way to handle this?
-
Happy to help!
-
Thanks Jennifer! Great answer - I wasn't sure which strategy would be better. Your answer makes a lot of sense. Thanks for your input!
-
Hi Oliver!
Before coming to SEOmoz, I worked for OntargetJobs, a company that runs multiple niche job boards. Here's what I would recommend:
- Keep those pages followed because people will link to them and you want to preserve as much of the link equity as you possibly can. So how do you do that?
- Make sure that when a job expires (or gets removed, whatever), the page gets 301 redirected to the category page the job was posted under. Depending on the niche, it may be locale-based; in that case, redirect it to the location page instead. The idea here is to send the user to a helpful page for a good user experience and conserve some link equity at the same time.
- On the page that gets redirected to, program it so that when a redirect happens, it displays a message at the top of the page. Something along the lines of: "Oops! The job you were looking for is no longer active. However, here are similar jobs in XYZ category." (There's a rough sketch of this pattern below.)
Again, as I mentioned above, this is a good way to help user experience, plus keep some of that link equity from the inevitable links job posting pages get.
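Here's a minimal sketch of that redirect-plus-message pattern, assuming a Flask-style app. Every name in it (the routes, the toy data model, the expired flag) is illustrative, not anyone's production code:

    # Minimal sketch of the expired-job 301 + banner pattern (illustrative names).
    from datetime import date
    from flask import Flask, redirect, request

    app = Flask(__name__)

    # Toy stand-in for the job board's CMS data.
    JOBS = {
        101: {"title": "Oncology Nurse", "category": "nursing",
              "expires": date(2012, 6, 30)},
    }

    @app.route("/jobs/<int:job_id>")
    def job_page(job_id):
        job = JOBS.get(job_id)
        if job is None or job["expires"] < date.today():
            # 301 the dead posting to its category page, flagging the
            # redirect so the category page can show a notice.
            category = job["category"] if job else "all"
            return redirect(f"/category/{category}?expired=1", code=301)
        return f"<h1>{job['title']}</h1>"

    @app.route("/category/<category>")
    def category_page(category):
        banner = ""
        if request.args.get("expired"):
            banner = ("<p>Oops! The job you were looking for is no longer "
                      f"active. However, here are similar jobs in {category}.</p>")
        return banner + f"<h2>Open jobs in {category}</h2>"

One design note: the ?expired=1 flag creates a second URL for each category page, so in practice a session flash or cookie keeps the category URL cleaner; either way, it's the 301 that conserves the link equity.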
I hope this helps!
Jen
-
I'm not sure I understand correctly: do you want to nofollow all the job postings that expire in 60 days?
If so, you can add a check in the CMS for the job posting's expiry date. Then, if somebody clicks through from the SERPs to an expired offer, a small script can 301 redirect them to the category of job postings most similar to the expired one.
Ciao
Related Questions
-
Robots.txt - "File does not appear to be valid"
Good afternoon Mozzers! I've got a weird problem with one of the sites I'm dealing with. For some reason, one of the developers changed the robots.txt file to disallow every page on the site - not a wise move! To rectify this, we uploaded the new robots.txt file to the domain's root as per Webmaster Tools' instructions. The live file is: User-agent: * (http://www.savistobathrooms.co.uk/robots.txt) I've submitted the new file in Webmaster Tools and it's pulling it through correctly in the editor. However, Webmaster Tools is not happy with it, for some reason. I've attached an image of the error. Does anyone have any ideas? I'm managing another site with the exact same robots.txt file and there are no issues. Cheers, Lewis
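One thing worth checking: under the original robots exclusion standard, every User-agent record is expected to contain at least one Disallow line, so a file that allows everything conventionally looks like this (the empty Disallow means "block nothing"):

    User-agent: *
    Disallow: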
Technical SEO | PeaSoupDigital
-
Site structure headache
Hello all, I'm struggling to get to grips with a website's structure. I appreciate that quality content is key, and the more content the better, but then I have issues with regard to doorway pages. For example, I'm now starting to develop a lot of ecommerce websites and want to promote this service. Should we have pages that detail all of the ins and outs of ecommerce, or should we simplify it to a couple of pages? What is best practice? Also, isn't a content hub similar to having doorway pages? Let me know what you think! William
Technical SEO | wseabrook
-
How many times robots.txt gets visited by crawlers, especially Google?
Hi, do you know if there's any way to track how often the robots.txt file has been crawled? I know we can check when it was last downloaded from Webmaster Tools, but I actually want to know whether crawlers download it every time they visit any page on the site (e.g. hundreds of thousands of times every day), or less often. Thanks...
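If you have access to the raw server logs, you can answer this for your own site directly by counting robots.txt requests. A sketch, assuming Apache-style combined logs (the log path and format are illustrative):

    # Count Googlebot fetches of robots.txt per day from an access log.
    import re
    from collections import Counter

    hits = Counter()
    day_pattern = re.compile(r"\[(\d{2}/\w{3}/\d{4})")  # e.g. [12/Mar/2012

    with open("/var/log/apache2/access.log") as log:
        for line in log:
            if "GET /robots.txt" in line and "Googlebot" in line:
                match = day_pattern.search(line)
                if match:
                    hits[match.group(1)] += 1

    for day, count in sorted(hits.items()):
        print(day, count)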
Technical SEO | linklater
-
Search engines have been blocked by robots.txt - how do I find and fix it?
My client's site royaloakshomesfl.com is coming up in my dashboard as having "Search engines have been blocked by robots.txt", only I have no idea where to find the file or how to fix the problem. Please help! I do have access to Webmaster Tools, and this site is a WP site, if that helps.
Technical SEO | LeslieVS
-
Robots.txt and canonical tag
In the SEOmoz post http://www.seomoz.org/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts, it says: "If you have a robots.txt disallow in place for a page, the canonical tag will never be seen." Does that mean that if a page is disallowed by robots.txt, spiders do not read the HTML code at all?
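For compliant crawlers, essentially yes: a disallowed URL is never fetched, so nothing in its HTML - the canonical tag included - gets read. As an illustration (the domain and paths here are made up), with this rule in robots.txt:

    User-agent: *
    Disallow: /old-page.html

a canonical tag sitting on that page is invisible:

    <!-- On /old-page.html - never crawled, so never seen -->
    <link rel="canonical" href="http://www.example.com/new-page/" />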
Technical SEO | seoug_2005
-
Using robots.txt to deal with duplicate content
I have 2 sites with duplicate content issues. One is a WordPress blog. The other is a store (Pinnacle Cart). I cannot edit the canonical tag on either site. In this case, should I use robots.txt to eliminate the duplicate content?
Technical SEO | bhsiao
-
Is robots.txt a must-have for a 150-page, well-structured site?
By looking in my logs, I see dozens of 404 errors each day from different bots trying to load robots.txt. I have a small site (150 pages) with clean navigation that allows the bots to index the whole site (which they are doing). There are no secret areas I don't want the bots to find (the secret areas are behind a login, so the bots won't see them). I have used rel=nofollow for internal links that point to my login page. Is there any reason to include a generic robots.txt file that contains "User-agent: *"? I have a minor reason: to stop getting 404 errors and clean up my error logs so I can find other issues that may exist. But I'm wondering whether not having a robots.txt file is the same as having some default blank file (or a one-line file giving all bots full access)?
Technical SEO | scanlin