Canonical tags and internal Google search
-
Quick question: I want some pages that will have canonical tags to show up in internal results for a Google site search that's built into the site. I'm not finished with the site yet, but is it correct to assume that pages with a canonical tag will NOT show up in internal site search results when the search is powered by Google?
-
Of note, you can also customize your built-in internal Google site search ("Customize search box and results using XML"): http://www.google.com/sitesearch/
-
Quite the opposite: pages designated as canonical are more likely to show up in results. Here is Google's own answer regarding rel=canonical: http://www.google.com/support/webmasters/bin/answer.py?answer=139394
Basically, a canonical tag tells Google, "If you find multiple copies of this page, use this version." Note that duplicate content can arise simply because a page is reachable both with and without the www subdomain, for example:
http://mydomain.com/mypage.html
http://www.mydomain.com/mypage.html
Some crawler bots can treat these as two separate pages, so using rel=canonical everywhere is generally considered a best practice to avoid these situations.
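For anyone unfamiliar with the markup, here is a minimal sketch of what that looks like, using the example URLs above (the href is whichever version you prefer Google to index):

<!-- Placed in the <head> of both the www and non-www copies of the page -->
<!-- Both copies point search engines at the single preferred URL -->
<link rel="canonical" href="http://www.mydomain.com/mypage.html">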
Related Questions
-
Canonical error from Google
Moz couldn't explain this properly and I don't understand how to fix it. Google emailed this morning saying "Alternate page with proper canonical tag." Moz also kinda complains about the main URL and the main URL/index.html being duplicates. Of course they are. The main URL doesn't work without the index.html page. What am I missing? How can I fix this to eliminate this duplicate problem, which to me isn't a problem?
Technical SEO | | RVForce0 -
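A common fix for the root-URL vs. /index.html duplication described above is a canonical tag on the index page pointing at the preferred URL. A minimal sketch, with a placeholder domain since the poster's site isn't named:

<!-- In the <head> of /index.html (which the root URL also serves) -->
<!-- Declares the root URL as the preferred version of the page -->
<link rel="canonical" href="https://www.example.com/">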
Hreflang and canonical
Hi all, I'm hoping someone can help me solve this once and for all! I keep getting hreflang errors on our site crawls and I cannot understand why. Does anything here look off to you? Thank you!
Technical SEO | | eGInnovations1 -
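A frequent cause of crawl-reported hreflang errors like the ones described above is a mismatch between the canonical URL and the hreflang annotations: each hreflang URL should be the canonical version of its page, the set should be reciprocal, and each page should include a self-reference. A minimal sketch with placeholder URLs (the poster's actual markup isn't reproduced here):

<!-- In the <head> of the UK page; every URL listed must carry the same set back -->
<link rel="canonical" href="https://www.example.com/uk/">
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/">
<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/">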
Carousel of cards at the top of a Google search results page?
When I searched for "mapping software", a carousel of images which displayed a variety of different companies appeared above the results list. Does anyone know what this is and how you go about getting your company into this carousel? The attached image displays the carousel.
Technical SEO | | eSpatial0 -
Google rankings strange behaviour - our site can only be found when searching repeatedly
Hello, We are experiencing something very odd at the moment and I hope somebody could shed some light on this. The rankings of our site dropped from page 2 to page 15 approx. 9 months ago. At first we thought we had been penalised and filed a reconsideration request. Google got back to us saying that there were no manual actions applied to our site. We have been working very hard to try to get the ranking up again and it seems to be improving. Now, according to several SERP monitoring services, we are on page 2/3 again for the term "holiday lettings". However, the really strange thing is that when we search for this term on Google UK, our site is nowhere to be found. If you then right away hit the search button again, searching for the same term, then voila! our website www.alphaholidaylettings.com is on page 2/3! We tried this on many different computers at different locations (private and public computers), making sure we had logged out of Google Accounts (so that customised search results are not returned). We even tried the computers at various retail outlets, including different Apple stores. The results are the same. Essentially, we are never found when someone searches for us for the first time; our site only shows up if you search for the same term a second or third time. We just cannot understand why this is happening. Somebody told me it could be due to the "Google dance", when indices on different servers are being updated, but this has now been going on for nearly 3 months. Has anyone experienced similar situations or have any advice? Many thanks!
Technical SEO | | forgottenlife0 -
Do I need both canonical meta tags AND 301 redirects?
I implemented a 301 redirect to the "www" version in the .htaccess (Apache server) file and my logs are DOWN 30-40%! I have to be doing something wrong!
AddType application/x-httpd-php .html .htm
RewriteCond %{HTTP_HOST} ^luckygemstones.com
RewriteRule (.*) http://www.luckygemstones.com/$1 [R=301,L]
RewriteCond %{THE_REQUEST} ^.*/index.htm
RewriteRule ^(.*)index.htm$ http://www.luckygemstones.com/$1 [R=301,L]
IndexIgnore *
ErrorDocument 404 http://www.luckygemstones.com/page-not-found.htm
ErrorDocument 500 http://www.luckygemstones.com/internal-serv-error.htm
ErrorDocument 403 http://www.luckygemstones.com/forbidden-request.htm
ErrorDocument 401 http://www.luckygemstones.com/not-authorized.htm
I've also started adding canonical META tags to EACH page. I'm using HTML 4.0 loose still; with 1000's of pages it's painful to convert to HTML5, so I left the trailing / off the tag so it would validate. Am I doing something wrong? Thanks, Kathleen
Technical SEO | | spkcp111
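For reference, a hypothetical sketch of the kind of tag being described: a canonical link written HTML 4.01-style, without the self-closing slash, so it validates under a loose doctype (the URL is only an example based on her domain, and strictly speaking it is a link element rather than a META tag):

<!-- HTML 4.01 loose syntax: no trailing "/>" on the link element -->
<link rel="canonical" href="http://www.luckygemstones.com/">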
IP canonicalization
Hi, I need your opinions about IP canonicalization. Our site www.peoplemaps.com is on the IP 78.136.30.112. We now redirect that IP to the main page (because of possible duplicate content). But we have more sites on the same IP address. How can that affect their SEO? Before the redirect, when we visited that IP address, the browser showed the main page of www.peoplemaps.com, not any other site. Thanks, Milan. Edit: we have used a 301 redirect.
Technical SEO | | MilanB.0 -
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest... We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told about the ref parameter, and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate content issues and everything was good.
This is the chain of events:
1. Site migrated to the new platform following best practice, as far as I can attest to. The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between relaunch on 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified.
2. URL structure and URIs were maintained 100% (which may be a problem, now).
3. Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I.
4. Expanding the report out, the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I.
5. Run, not walk, to Google and do some Fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI).
6. Checked Bing and it has indexed each root URL once, as it should.
Situation now:
- The site no longer uses the ?ref= parameter, although of course there still exist some external backlinks that use it. This was intentional and happened when we migrated.
- I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today it is at over 1,000 (another wtf moment).
- I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and HTML site-map page.
The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and of course the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist, and won't drop them from the index but will instead apply a dupe content penalty. Or maybe call us a spam farm. Who knows.
Options that occurred to me (other than maybe making our canonical tags bold or locating a Google bug submission form 😄 ) include:
A) robots.txt-ing *?ref=*, but to me this says "you can't see these pages", not "these pages don't exist", so isn't correct.
B) Hand-removing the URLs from the index through a page removal request per indexed URL.
C) Applying a 301 to each indexed URL (hello Bing dirty sitemap penalty).
D) Posting on SEOmoz because I genuinely can't understand this.
Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting. I have no idea why and can't think of the best way to correct the situation. Do you? 🙂
Edited to add: As of this morning, the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one.
There are no messages explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
Technical SEO | | Tinhat0 -
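For reference, a minimal sketch of the kind of canonical tag the poster says was in place, using the example URLs from the question above (assuming it sits in the <head> of every ?ref= variant, which is the standard way to collapse parameterised duplicates back onto the clean URL):

<!-- Served on /apple-phones.html and on every ?ref= variant of it -->
<!-- All variants declare the clean URL as the preferred version -->
<link rel="canonical" href="http://www.three-clearance.co.uk/apple-phones.html">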
Sitemaps for Google
In Google Webmaster Central, if a URL is reported in your sitemap as 404 (Not Found), I'm assuming Google will automatically clean it up and that the next time we generate a sitemap, it won't include the 404 URL. Is this true? Do we need to comb through our sitemap files and remove the 404 pages Google finds, or will it "automagically" be cleaned up by Google's next crawl of our site?
Technical SEO | | Prospector-Plastics0