Magento: Moz finding URL and URL?p=1 as duplicate. Solution?
-
Good day Mozzers!
Moz's bot is finding URLs on the catalogue pages in the formats www.example.com/something and www.example.com/something?p=1 and flagging them as duplicates (since they are the same page).
What's the best solution to implement here? Canonical? Anything else?
Cheers!
MozAddict
-
This is a popular plugin for managing parameters and faceted navigation in Magento: http://amasty.com/improved-navigation.html. I have read quite a few reviews of it, and it is referenced quite often, although I don't have hands-on experience with it myself.
Paired with some robots.txt rules, that should do the trick.
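For illustration, a robots.txt rule targeting only the first-page duplicate might look like this (a sketch based on the example URLs in this thread; wildcard `*` and end-anchor `$` support varies by crawler, so check the bots you care about):

```
User-agent: *
# Block only the first-page duplicate, e.g. /something?p=1,
# leaving /something and the deeper pages (?p=2, ?p=3, ...) crawlable
Disallow: /*?p=1$
```

Major crawlers such as Googlebot honour the `*` and `$` wildcards, but not every bot does.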
Hope this helps!
-
This is for the category pages. Since we have many products in each category, there are many pages as well. We are currently using rel="next"/"prev", but not canonical. The problem is that Magento generates two URLs for the first page of every category: www.example.com/something and www.example.com/something?p=1.
I don't see how we could block these in robots.txt or via URL parameters, because then they wouldn't be crawled at all?
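For the canonical route, the ?p=1 variant would point back to the clean category URL in its `<head>` (a sketch using the example URLs from this thread, not actual Magento-generated markup):

```html
<!-- Output on www.example.com/something?p=1 -->
<link rel="canonical" href="http://www.example.com/something" />
```

With this in place, both URLs can still be crawled, but search engines are told to consolidate ranking signals on the parameter-free version.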
-
I agree. Use robots.txt to filter out bad URLs, and set your URL parameters in Webmaster Tools. Be careful if you don't know how to use this, as you can potentially remove URLs you want to keep. Read up on it first.
-
Is there only one page? Are there www.example.com/something?p=2 and www.example.com/something?p=3 pages as well? If so, you may want to look into rel="prev"/"next".
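In case it helps, a paginated series typically declares the chain like this (a sketch assuming pages at ?p=2 and ?p=3 exist on the example domain):

```html
<!-- Output on www.example.com/something?p=2 -->
<link rel="prev" href="http://www.example.com/something" />
<link rel="next" href="http://www.example.com/something?p=3" />
```

Each page in the middle of the series points to both its neighbours; the first page omits rel="prev" and the last omits rel="next".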
-
Yes, canonical will help, but if the pages ARE duplicate content, you can also block them in robots.txt and tell Google about them in Webmaster Tools under URL Parameters.