Should we block URLs like this - domainname/shop/leather-chairs.html?brand=244&cat=16&dir=asc&order=price&price=1 - within the robots.txt?
-
I've recently added a campaign within the SEOmoz interface and received an alarming number of errors (~9,000) on our eCommerce website. This site was built in Magento, and we are using search-friendly URLs; however, most of our errors were duplicate content/title issues caused by URLs like domainname/shop/leather-chairs.html?brand=244&cat=16&dir=asc&order=price&price=1 and domainname/shop/leather-chairs.html?brand=244&cat=16&dir=asc&order=price&price=4.
Is this hurting us in the search engines? Is rogerbot too good?
What can we do to cut bots off after the ".html?"? Any help would be much appreciated.
-
I had the same problem on http://www.tokenrock.com because I was doing a lot of URL rewriting. It's a CMS I wrote myself, but the same issue applies. I went from 7,000+ errors according to SEOmoz down to 700. Here are a few things I did:
Use canonicals on everything you possibly can.
Redirect 301 the items in the SERPs that are identical.
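To make those two concrete, here is roughly what each looks like. The canonical is a tag in the <head> of every sorted/filtered variant that points back at the clean category URL (paths borrowed from the question above; treat them as placeholders):

<!-- in the <head> of /shop/leather-chairs.html?brand=244&cat=16&dir=asc&order=price&price=1 and its variants -->
<link rel="canonical" href="http://domainname/shop/leather-chairs.html" />

For duplicates already sitting in the SERPs, a 301 in Apache can be a one-liner in .htaccess (the old path here is hypothetical):

# .htaccess: permanently redirect an indexed duplicate to its canonical twin
Redirect 301 /shop/leather-chairs-old.html http://domainname/shop/leather-chairs.html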
I'm not familiar enough with Magento to help you work through that side of it.
Having a link like: domainname/leather-chairs-244-16-price-1.html would work much better.
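If you can get the CMS to output links in that form, a rewrite rule can map the clean URL back to the parameterized one internally, so visitors and bots only ever see the friendly version. A rough sketch with Apache mod_rewrite (the capture groups and parameter names are assumptions based on the URLs above, not Magento's actual ones):

# .htaccess: serve /leather-chairs-244-16-price-1.html from the parameterized page
# without exposing the query string
RewriteEngine On
RewriteRule ^leather-chairs-(\d+)-(\d+)-price-(\d+)\.html$ /shop/leather-chairs.html?brand=$1&cat=$2&order=price&price=$3 [L]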
The URLs you have listed only get crawled because somewhere on the site there is a link to them.
Unfortunately, some CMSs are written by developers who don't fully understand SEO and why the "?" is a bad thing.
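As for cutting bots off after the ".html?", a wildcard Disallow is one option. Googlebot and Bingbot both honor the * wildcard, though it isn't part of the original robots.txt standard, so test it in Google Webmaster Tools before relying on it. A minimal sketch:

# robots.txt: block crawling of any URL that contains a query string
# (careful: this also blocks any legitimate parameterized pages)
User-agent: *
Disallow: /*?

Keep in mind that robots.txt only stops crawling; URLs that are already known can stay indexed, which is why the canonical approach above is usually the safer first step.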
Related Questions
-
Robots.txt Help
I need help creating a robots.txt file. Please let me know what to add in the file. Any real or working example?
Intermediate & Advanced SEO | Michael.Leonard
-
Client wants a separate .tv domain for their media/videos instead of a subdomain/subfolder. What is the best way to pass off link equity to a new domain?
We have a client that wants to place their video content on a .tv TLD instead of in a subfolder/subdomain of their .com website. They believe that the .tv domain will better represent the media experience of their business. We understand this client's position, but we are concerned that the .tv domain will lose out on link equity if the content is no longer in the .com's subdomain/subfolder. Here are our questions:
1. What would be the best way to pass link equity from the .com website to a new .tv domain? Should we just have a video link on the .com website that 301-redirects to the new .tv domain?
2. Is there any SEO benefit to having a .tv domain for Google Video queries, or even YouTube?
3. Is there any long-term value in having two different websites? For link-equity purposes we understand that it would be better if everything stayed in the .com, but is a .tv domain ideal for a better representation of their media content?
We appreciate any feedback.
Intermediate & Advanced SEO | RosemaryB
-
Should I use meta noindex and robots.txt disallow?
Hi, we have an alternate "list view" version of every one of our search results pages. The list view has its own URL, indicated by a URL parameter. I'm concerned about wasting our crawl budget on all these list-view pages, which effectively double the number of pages that need crawling. When they were first launched, I had the noindex meta tag placed on all list-view pages, but I'm concerned that they are still being crawled. Should I therefore go ahead and also apply a robots.txt disallow on that parameter to ensure that no crawling occurs? Or will Googlebot/Bingbot stop crawling those pages over time on their own? I assume that noindex still means "crawl"... Thanks 🙂
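For reference, the two mechanisms this question combines look something like this (the view=list parameter name is an assumption for illustration):

<!-- on each list-view page: allows crawling, forbids indexing -->
<meta name="robots" content="noindex">

# robots.txt: forbids crawling entirely, so a noindex tag on the page is never seen
User-agent: *
Disallow: /*view=list

Combining them is usually counterproductive: once the Disallow is in place, Googlebot can no longer fetch the page to read the noindex tag.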
Intermediate & Advanced SEO | ntcma
-
Robots.txt help
Hi Moz Community, Google is indexing some developer pages from a previous website where I currently work: ddcblog.dev.examplewebsite.com/categories/sub-categories. I was wondering how to include these in a robots.txt file so they no longer appear on Google. Can I do it under our homepage GWT account, or do I have to set up a separate account for these URL types? As always, your expertise is greatly appreciated. -Reed
Intermediate & Advanced SEO | IceIcebaby
-
Robots.txt: URL syntax to disallow
Has anyone ever experienced "collateral damage" when disallowing some URLs? Some old URLs are still present on our website, and while we are cleaning them off the site (which takes time), I would like to avoid their indexation through the robots.txt file. The old URL syntax is "/brand//13" while the new one is "/brand/samsung/13" (note the two slashes after the word "brand" in the old URLs). Do I risk erasing the new, good URLs from the SERPs if I add the line "Disallow: /brand//" to the robots.txt file? I don't think so, but thank you to everyone who can help me clear this up 🙂
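For what it's worth, robots.txt Disallow rules are prefix matches against the URL path, so the distinction the question draws can be sketched like this (paths taken from the question itself):

# robots.txt: prefix matching keeps these two URL shapes distinct
User-agent: *
Disallow: /brand//
# Matches /brand//13 (double slash) but NOT /brand/samsung/13,
# because the latter does not begin with the literal prefix "/brand//"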
Intermediate & Advanced SEO | Kuantokusta
-
SEO for an overly-dynamic website with thousands of URLs
Hello, I have a new client who has a Diablo 3 database. They have created a very interesting site in which every "build" is its own URL. Every page is a list of weapons and gear for the gamer. Readers may love this, but it's a nightmare for SEO. I have pushed for a blog to help generate inbound links and traffic, but overall I feel the main feature of their site is a headache to optimize. They have thousands of pages indexed in Google, but none is really its own page: there is no strong content, no H-tags, and no real substance at all. With no clear definition for each page, Google sees this as a huge mess, with duplicate page titles and too many on-page links. The first thing I did was tell them to add a canonical link, which dropped the errors by about 12K, leaving only 2,400... which is a nice start, but the remaining errors are still a challenge. I'm thinking about seeing if I can either find a way to give each page its own blurb and H-tag, or simply noindex the nav bar and all the links in the database. That way the site is left with only a handful of URLs plus the blog and forum. Thoughts?
Intermediate & Advanced SEO | MikePatch
-
Soft 404s from pages blocked by robots.txt: cause for concern?
We're seeing soft 404 errors appear in our Google Webmaster Tools account for pages that are blocked by robots.txt (our search result pages). Should we be concerned? Is there anything we can do about this?
Intermediate & Advanced SEO | | nicole.healthline4 -
Robots.txt is blocking WordPress pages from Googlebot?
I have a robots.txt file on my server which I did not develop; it was created by the web designer at the company before me. There is also a WordPress plugin that generates a robots.txt file. How do I unblock all the WordPress pages from Googlebot?
Intermediate & Advanced SEO | ENSO