Robots.txt 404 problem
-
I've just set up a WordPress site with a hosting company that only allows you to install WordPress in http://www.myurl.com/folder rather than the root folder. I now have the problem that the robots.txt file only works at http://www.myurl.com/folder/robots.txt.
Of course, Google is looking for it at http://www.myurl.com/robots.txt and getting a 404 error. How can I get around this? Is there a way to tell Google in Webmaster Tools to use a different path to locate it? I'm stumped.
-
Can you give us the name of the hosting company by chance?
-
Can you do anything at all at the root of your domain, i.e. myurl.com? This makes no sense. How can you host a domain and have no control over the root of your own domain name, the one you're paying to host? Don't you have FTP access? When you connect using FTP, can't you see the server path for the root folder? You should. I would check via FTP. I have never seen such a scenario.
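If you do turn out to have FTP access to the web root, one workaround is a rewrite rule that serves the subfolder copy whenever a crawler requests the root robots.txt. This is a sketch only, assuming an Apache host with mod_rewrite enabled and .htaccess overrides allowed; "folder" stands in for your actual install directory:

```apache
# .htaccess placed in the web root (sketch; assumes Apache + mod_rewrite).
# Serves /folder/robots.txt when /robots.txt is requested, so Google
# finds the file at the location it expects.
RewriteEngine On
RewriteRule ^robots\.txt$ /folder/robots.txt [L]
```

If even .htaccess at the root is off-limits, uploading a plain copy of robots.txt to the root via FTP would achieve the same thing, at the cost of maintaining two copies.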
Related Questions
-
If I block a URL via the robots.txt - how long will it take for Google to stop indexing that URL?
Intermediate & Advanced SEO | Gabriele_Layoutweb
Google Indexing Duplicate URLs : Ignoring Robots & Canonical Tags
Hi Moz Community, We have the following robots.txt rule, which should prevent URLs with tracking parameters from being indexed:
Disallow: /*?
We have noticed Google has started indexing pages that use tracking parameters. Example below:
http://www.oakfurnitureland.co.uk/furniture/original-rustic-solid-oak-4-drawer-storage-coffee-table/1149.html
http://www.oakfurnitureland.co.uk/furniture/original-rustic-solid-oak-4-drawer-storage-coffee-table/1149.html?ec=affee77a60fe4867
These pages are identified as duplicate content, yet have the correct canonical tags:
https://www.google.co.uk/search?num=100&site=&source=hp&q=site%3Ahttp%3A%2F%2Fwww.oakfurnitureland.co.uk%2Ffurniture%2Foriginal-rustic-solid-oak-4-drawer-storage-coffee-table%2F1149.html&oq=site%3Ahttp%3A%2F%2Fwww.oakfurnitureland.co.uk%2Ffurniture%2Foriginal-rustic-solid-oak-4-drawer-storage-coffee-table%2F1149.html&gs_l=hp.3..0i10j0l9.4201.5461.0.5879.8.8.0.0.0.0.82.376.7.7.0....0...1c.1.58.hp..3.5.268.0.JTW91YEkjh4
With various affiliate feeds available for our site, we effectively have duplicate versions of every page due to the tracking query string, which Google seems willing to index, ignoring both the robots rules and the canonical tags. Can anyone shed any light on the situation?
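One point worth hedging here: a robots.txt Disallow only blocks crawling, not indexing, and it interacts badly with canonical tags. A sketch of the rule in question, with the caveat spelled out:

```
# robots.txt sketch of the rule quoted above.
# This blocks CRAWLING of any URL containing "?", but a blocked URL can
# still be INDEXED if it is linked to (e.g. from affiliate feeds) - and
# because Googlebot cannot fetch the page, it never sees the canonical
# tag that would consolidate the duplicate.
User-agent: *
Disallow: /*?
```

Under that reading, removing the Disallow so Google can crawl the parameterised URLs and honour their canonicals may resolve the duplication, though that is an assumption to verify rather than a guaranteed fix.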
Intermediate & Advanced SEO | JBGlobalSEO
Problems with Squarespace Title Tags
Hi All, I'm having problems editing the title tags on individual pages on Squarespace. It seems the only way to do it is via the page title name. Here is an example: http://www.autismsees.com/research/. The page is called Research, so that becomes the meta title. The problem is I want to keep "Research" on the page but have the meta title be "Autism Spectrum Research". I've tried searching the web, but no luck so far. Thanks for your help.
Intermediate & Advanced SEO | PeterRota
Robots.txt assistance
I want to block all the inner archive news pages of my website in robots.txt - we don't have R&D capacity to set up rel=next/prev or create a central page that all inner pages would canonical back to, so this is the solution. The first page, which I want indexed, reads:
http://www.xxxx.news/?p=1
All subsequent pages, which I want blocked because they don't contain any new content, read:
http://www.xxxx.news/?p=2
http://www.xxxx.news/?p=3
etc. There are currently 245 inner archive pages, and I would like to set it up so that future pages are automatically blocked, since we are always writing new news pieces. Any advice about what code I should use for this? Thanks!
Intermediate & Advanced SEO | theLotter
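One possible pattern is to block the paginated query wholesale and then carve out an exception for the first page. A sketch only: the `Allow` directive and the `$` end-of-URL anchor are Google extensions beyond the original robots.txt spec, so other crawlers may not honour them, and the path assumes the `?p=` structure quoted above:

```
User-agent: *
# Block every paginated archive URL (?p=2, ?p=3, ... and any future page)...
Disallow: /*?p=
# ...but allow the first page so it stays crawlable. The "$" anchors the
# match to the end of the URL, so ?p=1 matches but ?p=10 does not.
# Google resolves the conflict in favour of the more specific (longer) rule.
Allow: /*?p=1$
```

Because the Disallow matches any page number, new archive pages are blocked automatically as they are created, which covers the "future pages" requirement.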
Is it a problem to have too many 301 redirects within your site?
My website is translated into 10+ languages, but our news articles are often published in only one or two languages. Currently, URLs are created for the unpublished news languages that 301 redirect the user to the main news page, since the content doesn't exist in that language. Is this implementation okay, or is there a preferred method we should be using so that we don't have a large number of pages on the site with redirects? Thanks!
Intermediate & Advanced SEO | theLotter
Soft 404
Hey forum, My site is a price comparison site. Lately I've been getting some "Soft 404" errors in Webmaster Tools. I'll try to explain the steps causing it:
1. There's a valid link to a product.
2. At some point the product is temporarily out of stock or unavailable.
3. Google crawls this product page, getting a valid page with a message explaining that the product is unavailable at this time.
4. Google sees this page for a few different products and (I assume) figures it's a non-existent page, so it flags a soft 404.
The possible solutions I see are:
1. Return a real 404. I'm not a fan of this solution, because these links will very likely be valid again when the product is back in stock.
2. Live with some "soft 404" errors in Webmaster Tools.
3. Find another way to explain to Google that it's not a real 404. This sounds great, but I'm not sure how it can be done.
Any thoughts on which would be the best method? Or maybe another solution I haven't thought of? Thank you.
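The trade-off between options 1 and 2 can be made concrete as a status-code decision: permanently gone products get a real 404, while temporarily unavailable ones keep a 200 with clear availability messaging. This is a framework-agnostic sketch with a hypothetical `product` record - the field names are illustrative, not from any real system:

```python
def product_response(product):
    """Pick an HTTP status and body for a product page (sketch).

    `product` is a hypothetical dict with boolean flags:
      - "exists":   the product is still part of the catalogue
      - "in_stock": the product can currently be bought
    """
    if not product["exists"]:
        # Permanently removed from the catalogue: a real 404 is honest.
        return 404, "Not found"
    if not product["in_stock"]:
        # Temporarily unavailable: keep 200 so the URL retains its place
        # in the index, and say clearly when/why it is out of stock.
        return 200, "Temporarily out of stock"
    return 200, "Product page"
```

The remaining soft-404 risk is that the "out of stock" template looks too much like an error page to Google; keeping unique product details (title, description, price history) on the page is one way to signal it is a real page.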
Intermediate & Advanced SEO | corwin
Reciprocal Links and nofollow/noindex/robots.txt
Hypothetical situations:
1. You get a guest post on another blog and it offers a great link back to your website. You want to tell your readers about it, but linking to the post will turn that link into a reciprocal link instead of a one-way link, which presumably has more value. Should you nofollow your link to the guest post? My intuition here, and the answer that I expect, is that if it's good for users, the link belongs there, and as such there is no trouble with linking to the post. Is this the right way to think about it? Would grey hats agree?
2. You're working for a small local business and you want to explore some reciprocal link opportunities with other companies in your niche using a "links" page you created on your domain. You decide to get sneaky and either noindex your links page, block the links page with robots.txt, or nofollow the links on the page. What is the best practice? My intuition here, and the answer that I expect, is that this would be a sneaky practice and could lead to bad blood with the people you're exchanging links with. Would these tactics even be effective in turning a reciprocal link into a one-way link, if you could overlook the potential immorality of the practice? Would grey hats agree?
Intermediate & Advanced SEO | AnthonyMangia
Apache not directing to 404?
I have a PHP website that produces an actual page when /index.php/GarbageURL/MoreDirectories/Page.suffix/DirectoryAgain is typed into a browser. Why? How? For what purpose? The content and HTML are produced in the source, but the images and CSS are broken due to the apparent location of the file, obviously. I don't understand what this default behaviour is for.
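What's being described is almost certainly Apache's PATH_INFO handling: when a request path continues past a real script (/index.php), Apache still runs the script and passes the trailing segments to it as extra path data, which is why the HTML renders while relative asset URLs resolve against the bogus deeper path and break. If that behaviour is unwanted, a sketch of one fix, assuming an Apache host where .htaccess overrides are permitted:

```apache
# .htaccess sketch: refuse trailing path segments after a script name,
# so /index.php/GarbageURL/... returns 404 Not Found instead of
# rendering the page at a phantom URL.
AcceptPathInfo Off
```

The feature exists because some applications (and the CGI spec) use PATH_INFO deliberately for routing, e.g. /index.php/products/42, so turning it off is only safe if nothing on the site relies on it.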
Intermediate & Advanced SEO | Bombbomb