Canonicalize or Block?
-
Hi Mozers,
We have staff profile pages with one main URL, plus URLs with query parameters and jump links that take you to different parts of the page.
The longer URLs with parameters canonicalize to the main pages, but should they also be noindexed?
Thanks,
Yael
-
Hi Yael,
I completely agree - it is pretty much what canonical tags were developed for.
Regards
Nigel
-
Canonical and noindex are contradictory, Yael. It's either/or, never both. And in the case you describe, I doubt you could noindex the versions with parameters without also noindexing the main URL (since technically they are all the same page code).
What you are describing is the classic use case for canonical tags - the exact same page referred to by multiple different URLs.
Hope that makes sense?
Paul
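To make the pattern concrete, here is a minimal sketch (the staff-profile URLs are invented for illustration) of how every parameter and jump-link variant of a page maps back to a single canonical URL:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url: str) -> str:
    """Strip the query string and fragment, leaving the main profile URL."""
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path, "", ""))

# Every variant of the same staff profile resolves to one canonical target:
variants = [
    "https://example.com/staff/yael",
    "https://example.com/staff/yael?tab=publications",
    "https://example.com/staff/yael?tab=bio#awards",
]
print({canonical_url(u) for u in variants})
# → {'https://example.com/staff/yael'}
```

Each variant would then carry `<link rel="canonical" href="https://example.com/staff/yael">` in its `<head>`, with no noindex tag anywhere.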
Related Questions
-
Blocking Google from telemetry requests
At Magnet.me we track the items people are viewing in order to optimize our recommendations. As such we fire POST requests back to our backends every few seconds, once enough user-initiated actions have happened (think scrolling, for example). To keep bots from distorting the statistics, we ignore their values server-side. Based on some internal logging, we see that Googlebot is also performing these POST requests in its JavaScript crawling. Over a 7-day period, that amounts to around 800k POST requests. As we are ignoring that data anyhow, and it is quite a number, we considered reducing this for bots. We had several questions about this:
Technical SEO | rogier_slag
1. Do these requests count towards crawl budget?
2. If they do, and we'd want to prevent this from happening, what would be the preferred option: preventing the request in the frontend code, or blocking the request with a robots.txt line? We ask because an in-app block for the request could lead to different behaviour for users and bots, and maybe Google could penalize that as cloaking. The latter is slightly less convenient from a development perspective, as all the logic is spread throughout the application. I'm aware one should not cloak, or make pages appear differently to search engine crawlers. However, these requests do not change anything in the page's behaviour; they purely send some anonymous data so we can improve future recommendations.
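If the robots.txt route were chosen, the rule itself is a one-liner. This is only a sketch: the endpoint path /api/telemetry is an assumption, not something given in the question; substitute whatever path the POST requests actually hit:

```
User-agent: *
Disallow: /api/telemetry
```

Googlebot checks robots.txt before fetches made during JavaScript rendering too, so a disallowed telemetry URL simply never gets requested, which sidesteps the cloaking concern since the page markup itself is unchanged for everyone.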
What are the negative implications of listing URLs in a sitemap that are then blocked in the robots.txt?
In running a crawl of a client's site I can see several URLs listed in the sitemap that are then blocked in the robots.txt file. Other than perhaps using up crawl budget, are there any other negative implications?
Technical SEO | richdan
I'm thinking I might need to canonicalize back to the home site and combine some content, what do you think?
I have a site that is mostly just podcasts with transcripts, and it has both audio and video versions of the podcasts. I also contribute to a blog that links back to the video/transcript page of these podcasts. That blog hosts the exact same content (the podcast, both audio and video, but no transcript), split across an audio post and a video post. Each post has different text on it that is technically unique, but I'm not sure it's unique enough. So my question is: should I canonicalize the posts on this blog back to the original video/transcript page of the podcast, and then combine the video and audio posts? Thanks!
Technical SEO | ThridHour
Blocked by meta-robots but there is no robots file
OK, I'm a little frustrated here. I've waited a week for the next weekly index after changing the privacy setting in a WordPress website so Google can index it, but I still have the same problem: blocked by meta-robots, noindex, nofollow. But I do not see a robots.txt file anywhere, and the privacy setting in this WordPress site is set to allow search engines to index the site. The website is www.marketalert.ca. What am I missing here? Why can't Google index the rest of the website, and is there a faster way to test this rather than waiting another week just to find out it didn't work again?
Technical SEO | Twinbytes
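For anyone debugging the same thing: rather than waiting a week for a recrawl, fetch the page and look for the blocking tag directly in the HTML head. The tag in question looks like:

```
<meta name="robots" content="noindex,nofollow">
```

If it is present even though the WordPress privacy setting allows indexing, a theme or SEO plugin is likely adding it. It is also worth checking the HTTP response headers for the equivalent X-Robots-Tag: noindex header, which blocks indexing without any visible tag in the source.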
Linking C Class Blocks Problem
Hi 🙂 I've just discovered that my client, who has a medical practice, has created a series of micro sites about their doctors (around 10 or so). The problem is that they're on a shared host with the same C class, providing no real link benefit at all. Would it be best to:
A) Look for separate C class hosts for each site & migrate
B) Recreate the pages on the main site & 301 all doctor micro sites to the new pages
C) Leave as is and pursue other link building activities?
Has anyone run into a similar issue before? Thanks a bunch! Woj
Technical SEO | wojkwasi
How to block/notify Google that your domain has been added to sites with very low trustworthiness?
Hey guys, I am writing to the SEOmoz community because a problem occurred which I do not know how to solve: my domain (xyz.com) appeared on very strange sites with very low trustworthiness (some even blocked by Google). Checking those sites, I found that all of their images had ALT=xyz.com. Could this hurt my site's position in Google's rankings? How do I prevent such actions, and what should I do? Thanks for your help in advance!
Technical SEO | Kajmany
Mobile SEO or Block Crawlers?
We're in the process of launching mobile versions of many of our brand sites and our ecommerce site, and one of our partners suggested that we should block crawlers on the mobile view so it doesn't compete for the same keywords as the standard site (we will be automatically redirecting mobile handsets to the mobile site). Does this advice make sense? It seems counterintuitive to me.
Technical SEO | BruceMillard
Best blocking solution for Google
Posting this for Dave Sottimano. Here's the scenario: you've got a set of URLs indexed by Google, and you want them out quickly. Once you've managed to remove them, you want to block Googlebot from crawling them again, for whatever reason. Below is a sample of the URLs you want blocked, but you only want to block /beerbottles/ and anything past it:
www.example.com/beers/brandofbeer/beerbottles/1
www.example.com/beers/brandofbeer/beerbottles/2
www.example.com/beers/brandofbeer/beerbottles/3
etc.
To remove the pages from the index, should you:
1. Add the meta noindex,follow tag to each URL you want de-indexed
2. Use GWT to help remove the pages
3. Wait for Google to crawl again
If that's successful, to block Googlebot from crawling again, should you add this line to robots.txt:
DISALLOW */beerbottles/
Or add this line:
DISALLOW: /beerbottles/
"To add the * or not to add the *, that is the question" Thanks! Dave
Technical SEO | goodnewscowboy
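On the robots.txt syntax in Dave's question: the directive needs a colon, and Google matches patterns from the start of the URL path, so the two candidate rules behave quite differently. A sketch of both:

```
User-agent: Googlebot
# Wildcard form: matches /beerbottles/ at any depth
# (Google supports * in robots.txt patterns)
Disallow: /*/beerbottles/

# Literal form: only matches paths that BEGIN with /beerbottles/,
# so it would NOT block /beers/brandofbeer/beerbottles/1
# Disallow: /beerbottles/
```

Also note that robots.txt blocks crawling, not indexing, so the noindex-first, block-second order in the question is the right one: the pages must stay crawlable until Google has seen the noindex.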