Use of Robots.txt file on a job site
-
We are performing SEO on a large niche job board. My question is whether we should nofollow all of the actual job postings from their clients, since they only last for 30 to 60 days. Does anybody have an idea of the best way to handle this?
-
Happy to help!
-
Thanks Jennifer! Great answer - I wasn't sure which strategy would be better. Your answer makes a lot of sense. Thanks for your input!
-
Hi Oliver!
Before coming to SEOmoz I used to work for OntargetJobs, a company that runs multiple niche job boards. Here's what I would recommend:
- Keep those pages followed because people will link to them and you want to preserve as much of the link equity as you possibly can. So how do you do that?
- Make sure that when a job expires (or gets removed) the page gets 301 redirected to the category page the job was posted under. Depending on the niche, it may be locale-based; in that case, redirect it to the location page. The idea here is to send the user to a helpful page for a good user experience and conserve some link equity at the same time.
- On the page that gets redirected to, program it so that when a redirect happens, it displays a message at the top of the page. Something along the lines of "Oops! The job you were looking for is no longer active. However, here are similar jobs in XYZ category."
Again, as I mentioned above, this is a good way to help user experience and to keep some of the link equity from the inevitable links that job posting pages attract.
I hope this helps!
Jen
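The redirect-and-notice flow described above can be sketched as a small request handler. This is a minimal illustration, not any real job board's implementation: the record fields, URL patterns, and the `?expired=1` flag are all hypothetical assumptions.

```python
from datetime import date

def handle_job_request(job, today):
    """Return (status, location) for a job-page request.

    Active postings render normally (200). Expired ones get a permanent
    301 redirect to their category page, carrying a flag the category
    template can use to show the "no longer active" notice.
    """
    if job is None:
        # Posting was removed entirely; fall back to the jobs index.
        return (301, "/jobs/")
    if job["expires"] < today:
        # Expired: preserve link equity with a 301 to the category page.
        return (301, f"/jobs/{job['category']}/?expired=1")
    return (200, None)  # still active, serve the posting as usual

# Hypothetical records to exercise both branches.
active = {"category": "nursing", "expires": date(2031, 1, 1)}
expired = {"category": "nursing", "expires": date(2011, 1, 1)}

print(handle_job_request(active, date(2012, 6, 1)))   # (200, None)
print(handle_job_request(expired, date(2012, 6, 1)))  # (301, '/jobs/nursing/?expired=1')
```

The category template would then check for the `expired` flag and render the "Oops! The job you were looking for is no longer active" message at the top of the page.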
-
I'm not sure I understand correctly: do you want to add nofollow to all the job postings that expire in 60 days?
If so, you can put a control in the CMS for the expiry date of each job posting. If somebody clicks on an expired offer from the SERPs, you can trigger a little script with a 301 redirect to the job posting category most similar to the expired one.
Ciao
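The CMS-side expiry control suggested here could be sketched as a scheduled task that finds postings past their expiry date and builds the 301 targets from their categories. The table and field names below are assumptions for illustration only.

```python
from datetime import date

def build_redirect_map(postings, today):
    """Map each expired posting's URL to its category URL for 301 redirects."""
    redirects = {}
    for posting in postings:
        if posting["expires"] < today:
            # Expired: the web server or CMS would serve a 301 to this target.
            redirects[posting["url"]] = f"/jobs/{posting['category']}/"
    return redirects

# Hypothetical postings; only the first is past its expiry date.
postings = [
    {"url": "/jobs/rn-42", "category": "nursing", "expires": date(2012, 1, 31)},
    {"url": "/jobs/dev-7", "category": "engineering", "expires": date(2012, 12, 31)},
]

print(build_redirect_map(postings, date(2012, 6, 1)))
# {'/jobs/rn-42': '/jobs/nursing/'}
```

A real CMS would run something like this on a schedule (or check the expiry date per request) rather than holding the postings in memory.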