Canonical URLs for Search Parameters
-
Hi Guys
Our SEOmoz campaign report is returning a lot of Rel Canonical issues similar to this for each page. The non-slash version redirects to the "/" version, but how do I get the ones with search parameters (i.e. '?datefrom&nights') to redirect?
http://www.lamangaclubresort.co.uk/accommodations/las-brisas-78
http://www.lamangaclubresort.co.uk/accommodations/las-brisas-78/
http://www.lamangaclubresort.co.uk/accommodations/las-brisas-78/?datefrom&nights
http://www.lamangaclubresort.co.uk/accommodations/las-brisas-78/?datefrom=&nights=
Any help would be welcome, thanks.
-
I'm not seeing any evidence that Google is indexing multiple versions, and your canonical tags appear to be correct. Practically speaking, I think you're ok here.
I'd make sure that you're at least linking consistently (internally) to the "/" version and not both the "/" and no-slash version. If you're using both links, it could confuse crawlers (and our tools). Your search parameters should be ok, though. I don't see any evidence that those have been indexed. See this query:
site:lamangaclubresort.co.uk inurl:datefrom
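Purely as an illustration of the normalization being discussed (force the trailing slash, drop empty query parameters), here is a minimal Python sketch of what a server-side redirect rule would compute. This is an assumption-level sketch, not the site's actual redirect logic:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize(url):
    """Return the canonical form of a URL: trailing slash on the path,
    query parameters with empty values dropped."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    if not path.endswith("/"):
        path += "/"
    # Keep only parameters that actually carry a value.
    params = [(k, v) for k, v in parse_qsl(query, keep_blank_values=True) if v]
    return urlunsplit((scheme, netloc, path, urlencode(params), fragment))

base = "http://www.lamangaclubresort.co.uk/accommodations/las-brisas-78"
print(normalize(base))                          # adds the trailing slash
print(normalize(base + "/?datefrom=&nights="))  # strips the empty parameters
```

Both the no-slash and empty-parameter variants normalize to the same "/" URL, which is exactly what the canonical tag is already declaring.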
-
Just checked and found that we do have the canonical tag on our pages.
Does this mean that I can ignore the reported canonical issue in the SEOmoz report?
Related Questions
-
Moz-Specific 404 Errors Jumped with URLs that don't exist
Hello, I'm going to try and be as specific as possible concerning this weird issue, but I'd rather not give specific info about the site unless you think it's pertinent. To summarize, we have a website that's owned by a company that is a division of another company. For reference, we'll say that OURSITE.com is owned by COMPANY1, which is owned by AGENCY1.
This morning, we got about 7,000 new errors in Moz only (these errors are not in Search Console) for URLs with the company name or the agency name at the end of the URL. So, let's say one post is OURSITE.com/the-article/. This morning we have an error in Moz for the URLs OURSITE.com/the-article/COMPANY1 and OURSITE.com/the-article/AGENCY1, times the 7,000+ articles we have created. Every single post ever created is now an error in Moz because of these two URL additions that seem to come out of nowhere.
These URLs are not in our sitemaps, and they are not in Google... They simply don't exist, and yet Moz created an error with them. Unless they exist and I don't see them. Obviously there's a link to each company and agency site in the about us section, but that's it.
Moz Pro | CJolicoeur
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate-content pages. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate-content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. I want to do this because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions:
1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact?
2. Do I need an empty line between the two groups (I mean between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter?
I think this would help many people, as there is no clear answer on how to block crawling only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
-
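For what it's worth, plain robots.txt matching is prefix-based, and the `*` wildcard is a de-facto extension that most major crawlers honor; whether a particular bot supports it is worth confirming in that bot's documentation. As a rough sanity check of which URLs a wildcard-aware crawler would treat as disallowed, the pattern can be translated into a regular expression. This is an illustrative sketch of the matching convention, not Moz's actual implementation:

```python
import re

def is_disallowed(pattern, path):
    """Wildcard-aware robots.txt check: '*' matches any run of characters,
    and the match is anchored at the start of the path (prefix match).
    `path` should include the query string, e.g. "/hotels?numberOfStars=0"."""
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    return re.match(regex, path) is not None

print(is_disallowed("/*numberOfStars=0", "/hotels?sort=price&numberOfStars=0"))  # True
print(is_disallowed("/*numberOfStars=0", "/hotels?numberOfStars=4"))             # False
print(is_disallowed("/*numberOfStars=0", "/about"))                              # False
```

As for the blank line: in the original robots.txt convention a blank line ends a record, so keeping one between the dotbot group and the rogerbot group is the safe choice.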
Estimating Search Volume An Impossibility?
I use Moz's handy Keyword Difficulty Tool to get a relative feel for difficulty, traffic, and specific competition. At the same time, my experience is that a term may show a local (U.S.) search volume of, for instance, 30 visits and in the end produce hundreds of visits for position #3. This is, of course, accomplished through the miracle of all the other searches the page may be judged by Google to be relevant for. Hundreds at times, and sometimes few if any. Here is my two-part question: What tools or steps do you use to get a better handle on this on the front side of going for a term? What tools or steps do you use to broaden the meaning to related terms/searches over time? Thanks... Darcy
Moz Pro | 94501
-
How do I search for keywords or events within my facebook and twitter pages?
Hi all, I work for a bicycle component manufacturer. We have a group of different Facebook and Twitter pages, each covering a different aspect of our market, i.e. mountain biking, road biking, and urban biking. I am wondering if it is possible to apply the keywords function of Moz to my firm's individual Facebook and Twitter pages? I would like to be able to search for specific products that we have launched in the past. I am trying to collect the number of social media responses we have received for certain products. It would be great if I could even search for sentiments about the products we have launched. Thank you!
Moz Pro | mdaysram
-
SEO Web Crawler - Referrer Lists XML Sitemap URL
Hello! I recently ran the crawl tool on a client site. Opening up the file, I noticed that the referring URLs listed are my XML sitemaps and not (X)HTML pages. Any reason or thoughts behind why this is happening? Thanks!
Moz Pro | MorpheusMedia
-
Why do I keep getting "more than one canonical URL tag" on-page factor when, in fact, there is always only one?
The following are pages that SEOmoz says have "more than one canonical URL tag", but they all have only one. Can someone help me understand this?
http://www.lasercenterny.com/Laser-Hair-Removal-Binghamton/tabid/1950/Default.aspx
http://www.lasercenterny.com/Hair-Removal-Binghamton-NY/tabid/1949/Default.aspx
http://www.lasercenterny.com/Hair-Removal-Binghamton/tabid/1948/Default.aspx
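One way to double-check what a crawler actually sees is to count the canonical link elements in the raw HTML, which takes only the Python standard library. The page snippet and URL below are made up for illustration; note this only inspects delivered source, so a tag injected later by JavaScript would not show up:

```python
from html.parser import HTMLParser

class CanonicalCounter(HTMLParser):
    """Collects the href of every <link rel="canonical"> tag in a page."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
            self.canonicals.append(attrs.get("href"))

def count_canonicals(html):
    parser = CanonicalCounter()
    parser.feed(html)
    return parser.canonicals

page = '<html><head><link rel="canonical" href="http://example.com/page/"></head><body></body></html>'
print(count_canonicals(page))  # -> ['http://example.com/page/']
```

If this returns exactly one URL per page, the "more than one canonical" report is likely reacting to something else, such as the same template being rendered twice or a tool-side caching issue.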
Moz Pro | SmartWebPros
-
Excluding parameters from seomoz crawl?
I'm getting a ton of duplicate content errors because almost all of my pages feature a "print this page" link that adds the parameter "printable=Y" to the URL and displays a plain text version of the same page. Is there any way to exclude these pages from the crawl results?
Moz Pro | AmericanOutlets
-
Idea for the Best Method of getting REAL Keyword Search Data
I've been using Google's keyword tools to get keyword search data, but I'm pretty sceptical of some of the results they throw out on number of searches. On a number of keywords, particularly when you get to long tail, I've seen results which cannot possibly be right. So I am trying to find a way of getting REAL data I can trust. Here are my thoughts on a possible method of getting this key data:

1/. Firstly, I am taking on the assumption that the AdWords campaign monitoring statistics are more reliable and accurate than the keyword research tools - correct me if I am wrong!
2/. Then take a load of keywords you wish to monitor and create different ads for them based on EXACT match.
3/. Make sure none of your adverts conflict with each other, so you may have to turn off any other campaigns you have running.
4/. Put your bids and available spend as high as possible, so that ideally your ads will be shown for every search match and always come in the No. 1 spot. I.e., your aim is to create a neutral environment where your ads are shown for every search match and appear in the exact same position each time. Run the campaign for a period of time long enough to be confident that you have enough search data. From this you should have the vital data on keyword searches done on your exact match keywords.
5/. Repeat the test, but this time use phrase matches, and again make sure there are no conflicts.
6/. Repeat again with broad match - this would require very careful implementation to avoid any conflicts, and you would likely need to make heavy use of the "Not Include" keywords.

What are your thoughts on the above process? Any flaws, or other better solutions? Obviously one key thing with doing this is that you need to be prepared to have a decent budget to get this data - but it won't be wasted, as you will also be getting the AdWords traffic. Thanks
Moz Pro | James77