Restricted by robots.txt and soft 404 issues (related).
-
In our Webmaster Tools we have 35K (ish) URLs that are restricted by robots.txt, and we have 1,200 (ish) soft 404s. We can't seem to figure out how to properly resolve these URLs so that they no longer show up this way. Our SEO traffic has taken a major hit over the last two weeks because of this.
Any help?
Thanks, Libby
-
**These are duplicate URLs, and we can't figure out how they are getting created.**
I want to be sure we are talking about the same thing here. When I hear "duplicate URL" I am thinking of multiple URLs which point to the same web page. Depending on how your site is set up it is possible to have many different URLs point to the same web page. Possible examples are:
www.mydomain.com/tennis-rackets
www.mydomain.com/tennis-rackets/
mydomain.com/tennis-rackets?sort=asc
Above are three examples of URLs which can all lead to the same page. You can have dozens of URLs all lead to a page with identical content. How these issues get resolved depends upon how they were created.
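To make this concrete, here is a minimal sketch in Python of how such variants can be collapsed to one canonical form. The domain and the normalization rules (force www., drop the trailing slash, drop the query string) are only illustrative; use whichever rules match how your site is actually meant to resolve:

```python
from urllib.parse import urlsplit

def canonicalize(url: str) -> str:
    """Collapse common duplicate-URL variants to one canonical form."""
    parts = urlsplit(url if "://" in url else "http://" + url)
    host = parts.netloc.lower()
    if not host.startswith("www."):
        host = "www." + host              # force the www. variant
    path = parts.path.rstrip("/") or "/"  # drop the trailing slash
    return f"http://{host}{path}"         # drop any query string (e.g. ?sort=asc)

variants = [
    "www.mydomain.com/tennis-rackets",
    "www.mydomain.com/tennis-rackets/",
    "mydomain.com/tennis-rackets?sort=asc",
]
print({canonicalize(u) for u in variants})
```

All three variants collapse to a single URL, which is the form every duplicate should be redirected or canonicalized to.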
The best tool to help you figure this out is your crawl report. Run the SEOmoz crawl tool, then examine the crawl report. It can be a bit overwhelming at first, but you can narrow things down quickly if you use Excel.
Select the header row of your data (it begins with the URL field), then choose Data > Filter > AutoFilter from the menu. Start by looking at fields such as "Duplicate Page Content" and "URLs with duplicate content", and simply choose YES in the drop-down menu to filter on that particular column. This will help you uncover the source of these issues.
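If you prefer scripting to Excel, the same filter can be applied to the exported crawl CSV. This is a hedged sketch: the column names ("URL", "Duplicate Page Content") are assumptions and may not match your export exactly.

```python
import csv
from io import StringIO

# A tiny stand-in for the exported crawl report; the real export has many
# more columns, and these column names are assumed for illustration.
crawl_csv = """URL,Duplicate Page Content
www.mydomain.com/tennis-rackets,NO
www.mydomain.com/tennis-rackets/,YES
mydomain.com/tennis-rackets?sort=asc,YES
"""

reader = csv.DictReader(StringIO(crawl_csv))
duplicates = [row["URL"] for row in reader
              if row["Duplicate Page Content"] == "YES"]
print(duplicates)  # the URLs flagged as duplicate content
```

With the real report, replace `StringIO(crawl_csv)` with `open("crawl_report.csv")`.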
The URLs in my example should all be 301'd or canonicalized to the primary page to resolve the duplication issue.
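On an Apache server, for example, the www. and trailing-slash variants above could be 301'd with mod_rewrite rules along these lines. This is a sketch under assumptions (Apache with mod_rewrite enabled, rules in .htaccess, the hypothetical mydomain.com); adapt it to your own server and domain:

```apache
RewriteEngine On

# Force the www. variant (mydomain.com -> www.mydomain.com)
RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,L]

# Drop trailing slashes on URLs that are not real directories
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+)/$ /$1 [R=301,L]
```

For parameter variants like ?sort=asc, a rel="canonical" tag on the page pointing at the primary URL is often the simpler fix than a redirect.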
-
Well, part of the problem is that these are duplicate URLs, and we can't figure out how they are getting created. They were supposed to resolve to our 404 page... Should we remove them all?
-
Hi Libby.
How do you intend to resolve these URLs? Ideally you would remove your robots.txt entries and restrict the pages with meta tags such as "noindex, follow" or whatever is appropriate. Any links to 404 pages should be updated or removed.
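Before removing robots.txt entries, it can help to verify exactly which URLs the current file blocks. Here is a minimal sketch using Python's standard-library parser; the rules and URLs are made up for illustration (against a live site you would use `rp.set_url(...)` and `rp.read()` instead of `rp.parse(...)`):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules, parsed directly for illustration.
rules = """\
User-agent: *
Disallow: /tennis-rackets/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

for url in ("http://www.mydomain.com/tennis-rackets",
            "http://www.mydomain.com/tennis-rackets/old-model"):
    print(url, "->", "allowed" if rp.can_fetch("*", url) else "blocked")
```

Note that a noindex meta tag only works if the crawler can actually fetch the page, which is why lifting the robots.txt restriction comes first.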
What further direction do you seek?
Related Questions
-
Robots.txt allows wp-admin/admin-ajax.php
Hello, Mozzers! I noticed something peculiar in the robots.txt used by one of my clients: Allow: /wp-admin/admin-ajax.php What would be the purpose of allowing a search engine to crawl this file? Is it OK? Should I do something about it? Everything else in /wp-admin/ is disallowed. Thanks in advance for your help. -AK
Technical SEO | AndyKubrin
-
Magento Category Suffix - redirect issue
Hi All, Just launched a new Magento store and set the category suffix to blank, not .html or /. So the desired URL is https://www.example.com/category-1, but I am seeing a 301 redirect being implemented: https://www.example.com/category-1/ redirects to https://www.example.com/category-1. I can't see this in the list of 301 redirects within the redirect panel in Magento, but Moz and another redirect checker are picking it up. Am I missing a setting or something? Many Thanks, Pat
Technical SEO | PaddyM556
-
Adding your sitemap to robots.txt
Hi everyone, Best-practice question: when adding your sitemap to your robots.txt file, do you add the whole sitemap at once, or do you add the different subcategories (products, posts, categories, ...) separately? I'm very curious to hear your thoughts!
Technical SEO | WeAreDigital_BE
-
Will a robots.txt 'disallow' of a directory keep Google from seeing 301 redirects for pages/files within the directory?
Hi - I have a client who had thousands of dynamic PHP pages indexed by Google that shouldn't have been. He has since blocked these PHP pages via a robots.txt disallow. Unfortunately, many of those PHP pages were linked to by high-quality sites multiple times (instead of the static URLs) before he put up the PHP 'disallow'. If we create 301 redirects for some of these PHP URLs that are still showing high-value backlinks and send them to the correct static URLs, will Google even see these 301 redirects and pass link value to the proper static URLs? Or will the robots.txt keep Google away, so we lose all these high-quality backlinks? I guess the same question applies if we use the canonical tag instead of the 301: will the robots.txt keep Google from seeing the canonical tags on the PHP pages? Thanks very much, V
Technical SEO | Voodak
-
301 redirect: relative or absolute path?
Hello everyone, Recently we changed the URL structure on our website, and of course we had to 301 redirect the old URLs to the corresponding new ones. The way the technical guys did this is: "http://www.domain.com/old-url.html" 301 redirects to "/new-url.html", meaning a relative redirect path, not an absolute one like this: "http://www.domain.com/old-url.html" 301 redirects to "http://www.domain.com/new-url.html". This happened for a few thousand URLs, and organic traffic dropped for those pages after the change (no other changes were made to these pages, and the new URLs are as SEO-friendly as possible: an A grade in On-Page Grader). The question is: does a relative redirect negatively affect SEO, or does it count the same as an absolute-path redirect? Thanks, S.
Technical SEO | Silviu
-
Canonicalization Issue?
Good day! I am not sure whether my company has a canonicalization issue. When typing in www.cushingco.com, the site redirects to http://www.cushingco.com/index.shtml. A visitor can also type http://cushingco.com/index.shtml into a web browser and land on our homepage (and the URL will be http://www.cushingco.com/index.shtml). The majority of websites that link to our company point to http://www.cushingco.com/index.shtml. We are in the process of cleaning up citations and pulling together a content marketing strategy/editorial calendar, and I want to be sure folks interested in linking to us have the right URL. Please ask me any questions to help narrow down what we might be doing incorrectly. Thanks in advance!! Jon
Technical SEO | SEOSponge
-
Canonical Issue?
Hi, I was using the On-Page Report Card tool here on SEOmoz for the following page: http://www.priceline.com/eventi-a-kimpton-hotel-new-york-city-new-york-ny-1614979-hd.hotel-reviews-hotel-guides and it claims there is a canonical issue or improper use of the canonical tag. I looked at the element and it seems to be fine: <link rel="canonical" href="http://www.priceline.com/eventi-a-kimpton-hotel-new-york-city-new-york-ny-1614979-hd.hotel-reviews-hotel-guides" /> Can you spot the issue and how it would be fixed? Thanks. Eddy
Technical SEO | workathomecareers
-
Google (GWT) says my homepage and posts are blocked by robots.txt
Hi guys, I have a very annoying issue. My WordPress blog over at www.Trovatten.com has some indexation problems. Google Webmaster Tools says the following: "Sitemap contains urls which are blocked by robots.txt." and shows me my homepage and my blog posts. This is my robots.txt: http://www.trovatten.com/robots.txt
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Do you have any idea why it says the URLs are being blocked by robots.txt when that looks how it should? I've read in a couple of places that it can be caused by a WordPress plugin creating a virtual robots.txt, but I can't validate it. 1. I have set WP Privacy to allow my site to be crawled. 2. I have deactivated all WP plugins and I still get the same GWT warnings. Looking forward to hearing if you have an idea that might work!
Technical SEO | FrederikTrovatten22