Robots.txt 404 problem
-
I've just set up a WordPress site with a hosting company that only allows you to install WordPress in http://www.myurl.com/folder rather than in the root folder. The problem is that the robots.txt file now only works at http://www.myurl.com/folder/robots.txt.
Of course, Google is looking for it at http://www.myurl.com/robots.txt and getting a 404 error. How can I get around this? Is there a way to tell Google in Webmaster Tools to use a different path to locate it? I'm stumped.
-
Can you give us the name of the hosting company by chance?
-
Can you do anything at all at the root of your domain, i.e. myurl.com? This makes no sense: how can you host a domain and have no control over the root of your own domain name, the one you're paying to host? Don't you have FTP access? When you connect via FTP, can't you see the server path for the root folder? You should. I would check via FTP; I have never seen such a scenario.
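If the host does allow a file at the domain root (worth checking via FTP, as suggested above), one common workaround is a rewrite that serves the subfolder copy at the expected location. A minimal sketch, assuming Apache with mod_rewrite enabled and that /folder/ is the actual install path:

```apache
# Hypothetical .htaccess placed at the domain root (assumes Apache + mod_rewrite,
# and that the host permits root-level files even though WordPress lives in /folder)
RewriteEngine On
# Serve the subfolder copy whenever a crawler requests /robots.txt at the root
RewriteRule ^robots\.txt$ /folder/robots.txt [L]
```

With this in place, requests to http://www.myurl.com/robots.txt return the contents of /folder/robots.txt with a 200 status, so nothing needs to change in Webmaster Tools.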
Related Questions
-
Redirection Problem
I have a site that has 250,000 pages, and I want to redirect it to another domain. Is this good practice for SEO and Google?
Intermediate & Advanced SEO | MuhammadQasimAttari
-
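For what it's worth, a page-for-page 301 is the standard way to move a domain and generally preserves most link equity. A minimal sketch, assuming Apache with mod_rewrite; the domain names are placeholders:

```apache
# Hypothetical .htaccess on the old domain (olddomain.com / newdomain.com
# stand in for the real names)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?olddomain\.com$ [NC]
# Redirect every path to the same path on the new domain with a permanent 301
RewriteRule ^(.*)$ https://newdomain.com/$1 [R=301,L]
```

Redirecting each page to its equivalent URL (rather than everything to the new homepage) is the important part at this scale.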
Problems with US site being prioritized in Google UK
The US version (.com) of our site is appearing above the UK version (.co.uk) when searching on Google UK. I know Google has been giving the US version more priority in the UK market over the last couple of years... What is the protocol for fixing/dealing with this? Also, and probably more importantly, how do we handle users who are looking for the UK site right now? The majority of our users come from the US, so we don't want to inconvenience them, but UK users need an easy way to get to the UK version quickly. Input is much appreciated!
Intermediate & Advanced SEO | chrisvogel
-
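The standard tool for this situation is hreflang annotations, which tell Google which regional version to show in each market. A hedged sketch, using placeholder URLs for the two versions; equivalent tags would go in the `<head>` of both pages (each page listing itself and its alternate):

```html
<!-- Hypothetical hreflang annotations; URLs are placeholders for the
     real .com and .co.uk page pair -->
<link rel="alternate" hreflang="en-us" href="https://www.example.com/page/" />
<link rel="alternate" hreflang="en-gb" href="https://www.example.co.uk/page/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/page/" />
```

For users landing on the wrong version right now, a visible country switcher is usually safer than an automatic IP-based redirect, which can also interfere with crawling.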
Robots.txt assistance
I want to block all the inner archive news pages of my website in robots.txt. We don't have the R&D capacity to set up rel="next"/"prev" or to create a central page that all inner pages would have a canonical back to, so this is the solution. The first page, which I want indexed, reads:
http://www.xxxx.news/?p=1
All subsequent pages, which I want blocked because they don't contain any new content, read:
http://www.xxxx.news/?p=2
http://www.xxxx.news/?p=3
etc. There are currently 245 inner archive pages, and I would like to set things up so that future pages are automatically blocked, since we are always writing new news pieces. Any advice about what code I should use for this? Thanks!
Intermediate & Advanced SEO | theLotter
-
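For the pagination question above, one possible approach, assuming Googlebot's support for the `*` and `$` wildcards (not all crawlers honor them), would be rules along these lines:

```text
User-agent: *
# Block every paginated archive URL...
Disallow: /*?p=
# ...but re-allow page 1, the only page that should be indexed
# ($ anchors the match to the end of the URL, so ?p=10 stays blocked)
Allow: /*?p=1$
```

Future pages are covered automatically, since any new ?p=246 and beyond matches the Disallow pattern. One caveat: robots.txt only blocks crawling, so pages that are already indexed can linger in results; a meta noindex on pages 2+ is sometimes the safer tool if removal from the index is the real goal.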
HTTP status for a custom 404 page: plain 404, or a 400 Bad Request in certain circumstances?
We currently have a custom 404 page set up for our clients, but the developer has it returning an HTTP 200 status code. Big no-no; I'm having that fixed right now. My question is that, currently, the custom 404 page is only returned for URLs with the .aspx extension. For example, ilovepizza.com/pepperni.aspx returns the 404 page because the correct page is ilovepizza.com/pepperoni.aspx. Any URL without the extension (for example, ilovepizza.com/thumbtack) does not trigger the custom 404 page we've created; it triggers a plain server error page with a 404 HTTP status. I want to change this so that this type of error also triggers the custom 404 page, because it's more user-friendly and returns visitors to the website. My question: is there any benefit to making the /thumbtack errors return the custom 404 page but with a 400 Bad Request HTTP status? Kind of a novice in these aspects, but doesn't the 400 Bad Request status indicate that it was a user mistake and not a mistake on the website? Other suggestions?
Intermediate & Advanced SEO | EEE3
-
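For what it's worth, 400 Bad Request signals a malformed request (a syntax problem with the request itself), not a missing resource, so 404 is the correct status for both URL patterns. Since the site uses .aspx it is presumably on IIS; a hedged web.config sketch (the path is a placeholder) that serves the custom page for every 404, extensionless URLs included, while keeping the 404 status:

```xml
<!-- Hypothetical web.config fragment, assuming IIS 7+.
     existingResponse="Replace" makes IIS substitute the custom page for
     all 404 responses without changing the status code. -->
<system.webServer>
  <httpErrors errorMode="Custom" existingResponse="Replace">
    <remove statusCode="404" />
    <error statusCode="404" path="/custom-404.aspx" responseMode="ExecuteURL" />
  </httpErrors>
</system.webServer>
```

Using ExecuteURL (rather than a redirect) keeps the original requested URL in the address bar, which is what you want for a 404.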
Do I need a canonical tag on the 404 error page?
By definition, the 404 page is displayed for many different URLs (any URL that doesn't exist...). As I try to clean up my website following SEOmoz Pro advice, SEOmoz notifies me of duplicate content on URLs leading to the 404 page 🙂 This is, I guess, not that important, but I'm just curious: should we add a canonical tag to the template returning the 404, with a canonical URL such as www.mysite.com/404?
Intermediate & Advanced SEO | | nuxeo0 -
Hundreds of thousands of 404s on expired listings.
Hey guys, we have a conundrum with a large e-commerce site we operate. Classified listings older than 45 days are throwing up 404s: hundreds of thousands, maybe millions (note that Webmaster Tools caps its report at 100,000). Many of these listings receive links. Classified listings less than 45 days old show other possible products to buy, based on an algorithm. It is not possible for Google to crawl expired listing pages from within our site; they are indexed because they were crawled before they expired, which means that many of them still show in search results.
-> My thought at this stage, for usability reasons, is to replace the 404s with content (other product suggestions) and add a meta noindex, in order to help our crawl equity and get the pages we really want indexed prioritised.
-> Another consideration is to 301 each expired listing to its place in the category hierarchy, to pass possible link juice. But since many of these listings are findable in Google, we feel that is not a great user experience.
-> Or shall we just leave them as 404s? Google sort of says that's OK.
Very curious about your opinions and how you would handle this. Cheers, Croozie. P.S. I have read other Q&As regarding this but, given our large volumes and situation, thought it was worth asking, as I'm not satisfied that the solutions offered would match our needs.
Intermediate & Advanced SEO | sichristie
-
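The options weighed in the question above can be sketched as a per-listing decision rule: keep active listings as-is, 301 expired listings that have earned links, and serve a noindexed suggestion page for the rest. A hypothetical sketch in Python, using the 45-day window from the question; the function and header names are invented for illustration:

```python
from datetime import date, timedelta

# Listings older than this are expired (the 45-day window from the question)
EXPIRY_DAYS = 45

def response_for_listing(listed_on: date, has_inbound_links: bool, today: date):
    """Hypothetical decision helper sketching the three options discussed.

    Returns (status_code, extra_headers):
    - active listings: plain 200
    - expired listings with inbound links: 301 to the category page,
      to pass whatever link juice the listing earned
    - expired listings without links: 200 suggestion page marked noindex,
      which keeps users on site while dropping the URL from the index
    """
    if today - listed_on <= timedelta(days=EXPIRY_DAYS):
        return 200, {}
    if has_inbound_links:
        return 301, {"Location": "/category/"}
    return 200, {"X-Robots-Tag": "noindex"}
```

Splitting the expired set this way limits the poor-user-experience redirects to the listings where there is actually equity worth passing.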
Think I may have found a problem with site. Can you confirm my suspicions?
So I've been racking my brain about a problem. I posted earlier about our degrading rank, which we haven't been able to arrest, and I thought we were doing everything right. Many years ago we had a program that allowed other stores in our niche to use our site as a storefront if they couldn't deal with setting up their own site. They would have their own homepage on their own domain, but all links from that page would go to our site to avoid duplicate content issues (before I knew about canonical meta tags, or before they existed; I don't remember). I just realized that we had dozens of these domains pointing to our site without nofollow meta tags. Is it possible that this pattern looked like we were trying to game Google, and that we have been penalized as some kind of link farm since Panda? I've added nofollow meta tags to these domains. If we were being penalized for this, should that fix the problem?
Intermediate & Advanced SEO | IanTheScot
-
Should I robots block site directories with primarily duplicate content?
Our site, CareerBliss.com, primarily offers unique content in the form of company reviews and exclusive salary information. As a means of driving revenue, we also have a lot of job listings in our /jobs/ directory, as well as educational resources in our /career-tools/education/ directory. The bulk of this information comes from feeds and exists on other websites (duplicate content). Does it make sense to go ahead and robots-block these portions of our site? My thinking is that doing so will help reallocate our site authority, helping the /salary/ and /company-reviews/ pages rank higher; that is where most people find our site via search anyway. e.g.:
http://www.careerbliss.com/jobs/cisco-systems-jobs-812156/
http://www.careerbliss.com/jobs/jobs-near-you/?l=irvine%2c+ca&landing=true
http://www.careerbliss.com/career-tools/education/education-teaching-category-5/
Intermediate & Advanced SEO | CareerBliss
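If you do go the robots.txt route, the rules might look like the following (a hypothetical sketch using the directories named above). One caveat worth knowing: URLs that are already indexed and then blocked this way can linger in the index, because Google can no longer crawl them to see a noindex; applying meta noindex first and blocking in robots.txt later is often recommended for that reason.

```text
User-agent: *
# Block the duplicate-feed sections named in the question
Disallow: /jobs/
Disallow: /career-tools/education/
```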