URL Parameters Question - Exclude, or Use a Canonical Tag?
-
I'm trying to figure something out, as I just finished giving a "new look" to an old website. It uses a custom-built shopping cart, and the system worked pretty well until about a year ago, when rankings went down. My primary traffic used to come from top-level brand pages. Each brand gets sorted by the shopping cart and a parameter extension is added, so customers can click Page 1, Page 2, Page 3, etc.
So for example: http://www.xyz.com/brand.html , http://www.xyz.com/brand.html?page=1 , http://www.xyz.com/brand.html?page=2 and so on. The page= parameter is dynamic, therefore the page title, meta tags, etc. are the same, but the products displayed are different.
I don't want to exclude the page= parameter completely, as the products are different on each page and obviously I want the products to be indexed. At the same time, my concern is that having these parameters might be causing some confusion, and hence the drop I noticed in Google rankings.
I also want to note that, in my market, it's not necessary to break these pages up to target more specific keywords.
Maybe using something like this would be the appropriate measure?
-
Ah, OK, now I understand - I misread it a bit.
Well, there are two ways to do it then:
1. Rel canonical to a "view all products" page - in this case rel canonical is a valid option.
2. Implement pagination with rel next/prev - this will also work.
The first is usually the preferred option, but it does mean that search visitors would normally land on the all-products view. To a degree it depends on how many products each brand has, how user-friendly it is to see all the products together, page load times, etc.
Check out this page for a good rundown on the options and implementation.
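For illustration, the two options above might look like this in the head section of a paginated page. These are only sketches using the hypothetical URLs from the question - the "view all" URL in particular is an assumed name, not something from the site:

```html
<!-- Option 1: every paginated page points its canonical at a "view all" page -->
<!-- (placed on http://www.xyz.com/brand.html?page=2, ?page=3, etc.) -->
<link rel="canonical" href="http://www.xyz.com/brand.html?view=all">

<!-- Option 2: chain the series with pagination markup instead -->
<!-- (placed on http://www.xyz.com/brand.html?page=2) -->
<link rel="prev" href="http://www.xyz.com/brand.html?page=1">
<link rel="next" href="http://www.xyz.com/brand.html?page=3">
```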
-
Gotcha on the canonical - that makes sense.
But in terms of the page structure: essentially, loading 100 products on one page does not look good in my opinion, so I use pages to display 20-25 products each.
http://www.xyz.com/brand.html?page=1 would show the first 20, http://www.xyz.com/brand.html?page=2 would show the next 20, and so on.
Depending on the brand, there would be anywhere from 2-8 pages, and therefore 2-8 duplicate titles/descriptions, possibly leading to indexing problems.
All of the text content and the h1 are only shown on http://www.xyz.com/brand.html and excluded from pages like http://www.xyz.com/brand.html?page=2 - this is why I'm looking for a way to prioritize the main page for indexing purposes.
~thx
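For the structure described above, a hedged sketch of the rel next/prev approach - assuming http://www.xyz.com/brand.html is treated as the first page of the series (whether ?page=1 duplicates it is an open question in this thread, so that part is an assumption):

```html
<!-- On http://www.xyz.com/brand.html (treated as page 1 of the series) -->
<link rel="next" href="http://www.xyz.com/brand.html?page=2">

<!-- On http://www.xyz.com/brand.html?page=2 -->
<link rel="prev" href="http://www.xyz.com/brand.html">
<link rel="next" href="http://www.xyz.com/brand.html?page=3">

<!-- On the last page of the series (e.g. ?page=8), only rel="prev" appears -->
<link rel="prev" href="http://www.xyz.com/brand.html?page=7">
```

This keeps every page eligible for indexing while signaling that the URLs form one paginated series, with the main page at its head.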
-
Hi,
I'm not sure I understand your page structure setup. It is only one brand's products per page, right? You say the cart sorts the brands, but what does this mean in practice? Does http://www.xyz.com/brand.html?page=2 always show the products of one specific brand, or can it be different brands depending on the sorting? If the former, you shouldn't have a problem; if the latter, then yes, this could be a problem.
Regardless, adding the rel canonical as you describe is not the way you want to go. It is in effect saying you have only one page/brand you want indexed (whichever brand is on http://www.xyz.com/brand.html).
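To illustrate that warning, here is the markup being advised against (hypothetical URLs from the question):

```html
<!-- On http://www.xyz.com/brand.html?page=2, ?page=3, etc. -->
<link rel="canonical" href="http://www.xyz.com/brand.html">
<!-- In effect this says: "index only brand.html; the products that appear
     only on the other pages are not candidates for indexing" -->
```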