Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
Does Google pass link juice a page receives if the URL parameter specifies content and has the Crawl setting in Webmaster Tools set to NO?
-
The page in question receives a lot of quality traffic but is only relevant to a small percent of my users. I want to keep the link juice received from this page but I do not want it to appear in the SERPs.
-
Update - Google has crawled this correctly and is returning the correct, redirected page. Meaning, it seems to have understood that we don't want any of the parametered versions indexed ("return representative link") from our original page and all of its campaign-tracked brethren, and is then redirecting from the representative link correctly.
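The "return representative link" clustering described here is essentially a normalisation step: every campaign-tracked variant collapses onto one representative URL. A rough sketch of that idea in Python (the tracking parameter names below are illustrative, not Google's actual list):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Tracking parameters (e.g. SiteCatalyst campaign codes) that do not
# change page content -- these names are made up for illustration.
TRACKING_PARAMS = {"cid", "s_kwcid", "utm_source", "utm_medium", "utm_campaign"}

def representative_url(url: str) -> str:
    """Collapse a campaign-tracked URL onto its representative URL by
    dropping parameters that do not affect content."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))
```

If Google picks the same representative for every variant, signals such as link popularity should consolidate there, which matches the behaviour observed above.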
And finally there was peace in the universe...for now. ;> Tim
-
Agree...it feels like leaving a bit to chance, but I'll keep an eye on it over the next few weeks to see what comes of it. We seem to be re-indexed every couple of days, so maybe I can test it out Monday.
BTW, this issue really came up when we were creating a server-side 301 redirect for the root URL, and then I got to wondering if we'd need to set up an iRule for all parameters. Hopefully not...hopefully Google will figure it out for us.
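For what it's worth, a per-parameter rule shouldn't be necessary on Apache either: mod_rewrite passes the original query string through unchanged unless the substitution supplies its own. A sketch (the domains are placeholders):

```apache
# Hypothetical .htaccess sketch: 301 the old root to the new one.
# mod_rewrite appends the original query string by default, so
# /?cid=abc redirects to the new URL with ?cid=abc intact --
# no separate rule needed for each tracking parameter.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^old-example\.com$ [NC]
RewriteRule ^$ https://www.example.com/ [R=301,L]
```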
Thanks Peter. Tim
-
It's really tough to say, but moving away from "Let Google decide" to a more definitive choice seems like a good next step. You know which URL should be canonical, and it's not the parameterized version (if I'm understanding correctly).
If you say "Let Google decide", it seems a bit more like rel=prev/next. Google may allow any page in the set to rank, BUT they won't treat those pages as duplicates, etc. How does this actually impact the PR flow to any given page in that series? We have no idea. They're probably consolidating them on the fly, to some degree. They basically have to be, since the page they choose to rank from the set is query-dependent.
-
This question deals with dynamically created pages, it seems, and Google seems to recommend NOT choosing the "no" option in WMT - choose "yes" when you edit the parameter settings for this and you'll see an option for your case, I think, Christian (I know this is 3 years late, but still).
BUT I have a situation where we use SiteCatalyst to create numerous tracking codes as parameters to a URL. Since there is not a new page being created, we are following Google's advice to select "no" - apparently Google will:
"group the duplicate URLs into one cluster and select what we think is the "best" URL to represent the cluster in search results. We then consolidate properties of the URLs in the cluster, such as link popularity, to the representative URL."
What worries me is that a) the "root" URL will not be returned, somehow (perhaps due to the freakish amount of inbound linking to one of our parametered URLs), and b) the root URL will not be getting the juice. The reason we got suspicious about this problem in the first place was that Google was returning one of our parametered URLs (PA=45) instead of the "root" URL (PA=58).
This may be an anomaly that will be sorted out now that we changed the parameter setting from "Let Google Decide" to "No, page does not change" i.e. return the "Representative" link, but would love your thoughts - esp on the juice passage.
Tim
-
This sounds unusual enough that I'd almost have to see it in action. Is the JS-based URL even getting indexed? This might be a non-issue, honestly. I don't have solid evidence either way about GWT blocking passing link-juice, although I suspect it behaves like a canonical in most cases.
-
I agree. The URL parameter option seems to be the best solution since this is not a unique page. It is the main page with javascript that calls for additional content to be displayed in the form of a lightbox overlay if the condition is right. Since it is not an actual page, I cannot add the rel-canonical statement to the header. It is not clear however, whether the link juice will be passed with this parameter setting in Webmaster Tools.
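One possible workaround worth noting: Google also accepts rel=canonical sent as an HTTP Link header, which doesn't require editing the page's HTML head. A sketch, assuming Apache 2.4+ with mod_headers and using the v3 parameter from the example in this thread:

```apache
# Sketch only (Apache 2.4+, mod_headers): when the v3 parameter is
# present, send the canonical as an HTTP header instead of in the HTML.
<If "%{QUERY_STRING} =~ /(^|&)v3(=|&|$)/">
    Header set Link "<http://website.com/>; rel=\"canonical\""
</If>
```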
-
If you're already using rel-canonical, then there's really no reason to also block the parameter. Rel-canonical will preserve any link-juice, and will also keep the page available to visitors (unlike a 301-redirect).
Are you seeing a lot of these pages indexed (i.e. is the canonical tag not working)? You could block the parameter in that case, but my gut reaction is that it's unnecessary and probably counter-productive. Google may just need time to de-index (it can be a slow process).
I suspect that Google passes some link-juice through blocked parameters and treats it more like a canonical, but it may be situational and I haven't seen good data on that. So many things in Google Webmaster Tools end up being a bit of a black box. Typically, I view it as a last resort.
-
I can just repeat myself: Set Crawl to yes and use rel canonical with website.com/?v3 pointing to website.com
-
My fault for not being clear.
I understand that the rel=canonical cannot be added to the robots.txt file. We are already using the canonical statement.
I do not want to add the page with the URL parameter to the robots.txt file as that would prevent the link juice from being passed.
Perhaps this example will help clarify:
URL = website.com
URL parameter = website.com/?v3
website.com/?v3 has a lot of backlinks. How can I pass the link juice to website.com and NOT have website.com/?v3 appear in the SERPs?
-
I'm getting a bit lost with your explanation, maybe it would be easier if I saw the URLs, but here's a brief:
I would not use parameters at all. Clean URLs are best for SEO; remove everything not needed. You definitely don't need a URL parameter to indicate that content is unique for 25% of traffic. (I got a little bit lost here: how can content be unique for just part of your traffic? If it is found elsewhere on your page it is not unique; if it is not found elsewhere, it is unique.) So anyway, those URL parameters don't indicate anything to Google, they just stuff your URL structure with useless info (for Google), so why use them?
I am already using a link rel=canonical statement. I don't want to add this to the robots.txt file as that would prevent the juice from being passed.
I totally don't get this one. You can't add a canonical to robots.txt; rel=canonical is not a robots.txt directive.
To sum up: if you do not want your parametered page to appear in the SERPs then, as I said: set Crawl to yes and use rel canonical. This way the page will no longer appear in the SERPs, but it will be available for readers and will pass link juice.
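Concretely, that advice amounts to leaving the parametered URL crawlable and serving this tag in the head of the page at website.com/?v3 (a sketch using the example domain from this thread):

```html
<!-- Served in the <head> of website.com/?v3 (and any other
     parametered variant of the home page). Signals such as link
     popularity should consolidate onto the clean URL, and only
     that URL should appear in the SERPs. -->
<link rel="canonical" href="http://website.com/" />
```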
-
The parameter to this URL specifies unique content for 25% of my traffic to the home page. If I use a 301 redirect then those people will not see the unique content that is relevant to them. But since this parameter is only relevant to 25% of my traffic, I would like the main URL displayed in the SERPs rather than the unique one.
Google's Webmaster Tools lets you choose how you would like Google to handle URL parameters. When using this tool you must specify the parameter's effect on content. You can then specify what you would like Googlebot to crawl. If I say NO crawl, I understand that the page with this parameter will not be crawled, but will the link juice be passed to the page without the parameter?
I am already using a link rel=canonical statement. I don't want to add this URL parameter to the robots.txt file either, as that would prevent the juice from being passed.
What is the best way to keep this parameter and pass the juice to the main page but not have the URL parameter displayed in the SERPs?
-
What do you mean by "URL parameter specifies content"?
If a page is not crawled it definitely won't pass link juice. Set Crawl to yes and use rel canonical: http://www.youtube.com/watch?v=Cm9onOGTgeM