IIS 7.5 - Duplicate Content and Totally Wrong robot.txt
-
Well here goes! My very first post to SEOmoz.
I have two clients that are hosted by the same hosting company. Both sites have major duplicate content issues and appear to have no internal links. I have checked this both here with our awesome SEOmoz Tools and with the IIS SEO Toolkit.
After much waiting I have heard back from the hosting company and they say that they have "implemented redirects in IIS7.5 to avoid duplicate content" based on the following article: http://blog.whitesites.com/How-to-setup-301-Redirects-in-IIS-7-for-good-SEO__634569104292703828_blog.htm.
In my mind this article covers things better: www.seomoz.org/blog/what-every-seo-should-know-about-iis. What do you guys think?
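For reference, my understanding is that the kind of rule the host is describing is a canonical-host 301 done with the IIS URL Rewrite module in web.config. A rough sketch of what I believe that looks like (assuming the module is installed; example.com stands in for the real domain):

    <!-- web.config fragment: 301 requests for the bare domain to the www host.
         Assumes the IIS URL Rewrite module; example.com is a placeholder. -->
    <system.webServer>
      <rewrite>
        <rules>
          <rule name="Canonical host" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTP_HOST}" pattern="^example\.com$" />
            </conditions>
            <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>

Is that roughly what they should have in place?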
Next issue: both clients (as well as other sites hosted by this company) have a robot.txt file that is not their own. It appears that the host has taken one client's robot.txt file and used it as a template for the other client sites. I could be wrong, but I believe this is causing the internal links to not be indexed. There is also a sitemap, again not one for each client, but rather the one for the client that the original robot.txt file was created for. Again, any input on this would be great. I have asked that the files just be deleted, but that has not happened yet.
Sorry for the messy post... I'm at the hospital waiting to pick up my bro and could be called to get him any minute.
Thanks so much,
Tiff
-
Yeah, I think to answer we'd definitely need to see the robots.txt content.
Have you determined those pages are truly not indexed at all? Like, have you Googled specific sentences from those pages and gotten zero results?
If not, do so, make a list of all the pages Google isn't indexing, and compare that list to your robots.txt...
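If you want to automate that comparison, Python's standard library can tell you which URLs a given robots.txt would block; a rough sketch, with the domain and page list as placeholders:

    from urllib.robotparser import RobotFileParser

    # Placeholders: swap in the real domain and the pages missing from Google.
    robots_url = "http://www.example.com/robots.txt"
    pages = [
        "http://www.example.com/",
        "http://www.example.com/about-us/",
        "http://www.example.com/services/",
    ]

    parser = RobotFileParser(robots_url)
    parser.read()  # fetch and parse the live robots.txt

    for url in pages:
        allowed = parser.can_fetch("Googlebot", url)
        print(("OK     " if allowed else "BLOCKED"), url)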
Or just make a new robots.txt file for each domain; shouldn't take more than a minute or two.
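For each client site, a bare-bones robots.txt that blocks nothing and points at that client's own sitemap would look something like this (the domain is a placeholder):

    # robots.txt for www.client-one.com (placeholder domain)
    User-agent: *
    Disallow:

    Sitemap: http://www.client-one.com/sitemap.xml

An empty Disallow line means nothing is blocked, which is probably the safest starting point until you know what (if anything) actually needs excluding.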
-
Hey Tiff,
I don't have a complete answer for you as I'm doing some insomnia Q&A (it's 4:30 am in Oz), but I really hope you are using robots.txt rather than robot.txt? Search engines only look for a file named robots.txt, so a misnamed robot.txt is simply ignored, and that may be the reason it is not working.
Can you tell us what the contents of the robot/s.txt file are?
Dan
-
Related Questions
-
Do mobile and desktop sites that pull content from the same source count as duplicate content?
We are about to launch a mobile site that pulls content from the same CMS, including metadata. The two sites live on different hostnames, however (www.abcd.com and www.m.abcd.com). How will this affect us in terms of search engine ranking?
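From what I have read, the usual way to keep a separate mobile site from being treated as duplicate content is a pair of annotations, though I am not certain this covers everything. Something like this, using our domains:

    <!-- on the desktop page (www.abcd.com/page) -->
    <link rel="alternate" media="only screen and (max-width: 640px)"
          href="http://www.m.abcd.com/page">

    <!-- on the corresponding mobile page (www.m.abcd.com/page) -->
    <link rel="canonical" href="http://www.abcd.com/page">

Is that the right approach here?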
Technical SEO | ovenbird
-
Duplicate Content Problem!
Hi folks, I have a quite awkward problem. For a few weeks I have been getting a huge number of "duplicate content" errors in my Moz crawl reports. After a while of looking for the cause, I thought of the domains I had bought additionally. So I went to Google and typed in site:myotherdomains.com. The result was, as I expected, that my original website had been indexed under my new domains as well. For example, my original website was indexed with www.domain.com/aboutus; then I bought some additional domains which point at my / folder, and now I am also listed with www.mynewdomains.com/aboutus. How can I fix that? I tried a normal domain redirect, but it seems that doesn't help: when I visit www.mynewdomains.com, the domain doesn't change in my browser to www.myoriginaldomain.com but stays as it is. I have been busy the whole day trying to find a solution and I am kinda desperate now. If somebody could give me advice it would be much appreciated. Mike
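P.S. My current guess is that what I actually need is a server-side 301 from each extra domain to the original one, rather than the forward my registrar set up. If the server is Apache with mod_rewrite, I believe it would look something like this (domains as in my example above):

    # .htaccess sketch: 301 anything not on the original host over to it
    RewriteEngine On
    RewriteCond %{HTTP_HOST} !^www\.myoriginaldomain\.com$ [NC]
    RewriteRule ^(.*)$ http://www.myoriginaldomain.com/$1 [R=301,L]

Can anyone confirm whether that is the right idea?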
Technical SEO | KillAccountPlease
-
Duplicate content in Magento
Hi all, We have some serious issues with duplicate content on a Magento site that we are marketing. For example:
http://www.citcop.se/varmepumpar-luft-luft/panasonic/panasonic-nordic-ce9nke-5-0kw
http://www.citcop.se/panasonic/panasonic-nordic-ce9nke-5-0kw
http://www.citcop.se/panasonic-nordic-ce9nke-5-0kw
All of the above seem to work just fine as things stand, but since they are exactly the same product, they should of course 301 redirect to the main page. Any ideas on how to sort this out in Magento without having to resort to manual work in .htaccess? Have a great day Fredrik
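P.S. I have seen mention that Magento can emit a canonical tag by itself (somewhere under System > Configuration > Catalog > Search Engine Optimizations, if I remember right), which as far as I understand would put something like this on every variant URL:

    <link rel="canonical" href="http://www.citcop.se/panasonic-nordic-ce9nke-5-0kw" />

Would that be enough, or do we really need proper 301s as well?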
Technical SEO | Resultify
-
Allow or Disallow First in Robots.txt
If I want to override a Disallow directive in robots.txt with an Allow command, do I put the Allow command before or after the Disallow command? Example:
Allow: /models/ford/*/*/page*
Disallow: /models/*/*/*/page*
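From what I have read, Googlebot goes by the most specific (longest) matching rule rather than file order, but some older crawlers read top-down and take the first match, so I would like to get the order right anyway. In other words, is this the safe layout, with the more specific Allow first?

    User-agent: *
    Allow: /models/ford/*/*/page*
    Disallow: /models/*/*/*/page*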
Technical SEO | irvingw
-
Tags and Duplicate Content
Just wondering: for a lot of our sites we use tags as a way of re-grouping articles / news / blogs, so all of the info on, say, 'government grants' can be found on one page. These /tag pages often come up with duplicate content errors. Is that a big issue, and how can we minimise it?
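Our working assumption is that a robots meta tag on the /tag pages is the standard fix, keeping them out of the index while still letting crawlers follow the links on them, i.e.:

    <!-- on each /tag page -->
    <meta name="robots" content="noindex,follow">

Is that the right call, or is a rel=canonical to the main article the better option?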
Technical SEO | salemtas
-
If I post an article on my site and another site has asked to use the same article, is this a duplicate content issue with Google given that I am the creator of the content, and will it penalize our sites, or one more than the other?
I operate an ecommerce site for outdoor gear and was invited to guest post on a popular blog (not my site) about a trip I had been on. I wrote the article for them, and I will also post this same article on my website. Is this a duplicate content problem with Google for my site and/or the other site? Any help is appreciated. Also, what about posting this same article to one or two other blogs, as long as they link back to me as the author of the article?
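One option I have read about, although I am not sure the blogs would agree to it, is a cross-domain canonical on their copy pointing back at mine, something like this on their page (the URL is a made-up placeholder for my article):

    <link rel="canonical" href="http://www.example-outdoor-store.com/blog/my-trip-report" />

Would that settle which site Google treats as the original?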
Technical SEO | isle_surf
-
Complex duplicate content question
We run a network of three local web sites covering three places in close proximity. Each site has a lot of unique content (mainly news), but there is a business directory that is shared across all three sites. My plan is for the search engines to only index the businesses in the directory that are actually located in the place each site is focused on, i.e. listing pages for businesses in Alderley Edge are only indexed on alderleyedge.com and businesses in Prestbury only get indexed on prestbury.com, even though every business has a listing page on each site. What would be the most effective way to do this? I have been using rel canonical, but Google does not always seem to honour it. Would using meta noindex tags where appropriate be the way to go? Or would changing the URL structure to include the place name and using robots.txt be a better option? As an aside, my current URL structure is along the lines of: http://dev.alderleyedge.com/directory/listing/138/the-grill-on-the-edge . Would changing this have any SEO benefit? Thanks Martin
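P.S. To make the two options concrete, on alderleyedge.com's copy of a business actually located in Prestbury it would be one or the other of these (the listing path is a made-up placeholder):

    <!-- option 1: canonical pointing at the home-town copy -->
    <link rel="canonical" href="http://www.prestbury.com/directory/listing/572/some-prestbury-business" />

    <!-- option 2: keep the page out of the index on this site entirely -->
    <meta name="robots" content="noindex,follow">

Which of these is Google more likely to respect?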
Technical SEO | mreeves
-
Robots.txt and robots meta
Technical SEO | Highland
I have an odd situation. I have a CMS that has a global robots.txt which has the generic:

    User-Agent: *
    Allow: /

I also have one CMS site that needs to never be indexed. I've read in various places (like http://www.jesterwebster.com/robots-txt-vs-meta-tag-which-has-precedence/22 ) that robots.txt always wins over meta, but I have also read that robots.txt indicates spiderability whereas meta can control indexation. I just want the site to not be indexed. Can I leave the robots.txt as is and still put NOINDEX in the robots meta?
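My current thinking is that the two mechanisms do different jobs: robots.txt controls crawling, while the meta tag controls indexing, and a crawler has to be able to fetch a page in order to see a noindex on it. So leaving the global Allow: / alone and adding this to every page of the one site should do what I want:

    <meta name="robots" content="noindex">

Does that sound right?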