URL Formatting for Internal Link Tagging
-
After doing some research on internal campaign link tagging, I have seen conflicting viewpoints from analytics and SEO professionals regarding the most effective and SEO-friendly way to tag internal links for a large ecommerce site.
It seems there are several common methods of tagging internal links, which can alter how Google interprets these links and indexes the URLs these links point to.
- Query parameter - Using ? or & to append a parameter like cid to all internal links. Since Google will crawl and index these URLs, this method has the potential to cause duplicate content.
- Hash - Using # to append a fragment parameter like cid to all internal links.
- JavaScript - Using an onclick event to pass tracking data to your analytics platform while leaving the href untouched.
- Not tagging internal links - This provides the cleanest possible link paths for Google and for users navigating the site, and prevents duplicate content issues, but analytics will be less effective.
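As a rough illustration, the first three methods differ mainly in where the tracking value lives. A minimal sketch (the parameter name cid, the function names, and the trackEvent callback are all hypothetical, not tied to any specific analytics platform):

```javascript
// 1. Query parameter: the cid becomes part of the URL the server sees,
//    so Google can crawl and index the tagged variant.
function tagWithQuery(href, cid) {
  const url = new URL(href);
  url.searchParams.set("cid", cid);
  return url.toString();
}

// 2. Hash fragment: the cid rides in the fragment, which is never sent
//    to the server as part of the request path.
function tagWithHash(href, cid) {
  const url = new URL(href);
  url.hash = "cid=" + cid;
  return url.toString();
}

// 3. onclick handler: the href stays clean; tracking data is pushed to
//    the analytics platform at click time via a callback.
function handleClick(href, cid, trackEvent) {
  trackEvent({ link: href, campaign: cid });
  return href; // navigation proceeds with the untagged URL
}

console.log(tagWithQuery("https://example.com/shoes", "nav-top"));
// https://example.com/shoes?cid=nav-top
console.log(tagWithHash("https://example.com/shoes", "nav-top"));
// https://example.com/shoes#cid=nav-top
```

The trade-off the thread is about falls out of the first function: only the query-parameter variant produces a distinct crawlable URL per campaign tag.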
For those of you who manage SEO or analytics for large (1 million+ visits per month) ecommerce sites, what method do you employ and why?
Edit: For this discussion, I am only concerned with tagging links within the site that point to other pages on the same site - not links that come from outside the site or lead offsite.
Thank you
-
Thanks for the answer. I will be using Ensighten to manage the link tagging, so any of the options I presented can be used.
Have you found any negative consequences from using a query parameter to tag internal links, such as duplicate pages being indexed? Or does the Webmaster Tools setup, combined with canonical tags, eliminate the chance of this happening?
Query parameters are also used in our URLs outside of tagging (for sorting, brand, and other refiners). Is it possible to specify that certain parameter-appended URLs should be crawled and others not?
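One way to reason about that last question: the canonical URL should strip only the tracking parameter while keeping functional refiners that produce genuinely different pages. A hypothetical sketch (the parameter names cid, brand, and sort are assumptions for illustration):

```javascript
// Parameters that are pure tracking and should never define a distinct page.
// Functional refiners (brand, sort, etc.) are deliberately NOT listed here.
const TRACKING_PARAMS = new Set(["cid"]);

// Derive the canonical URL for a tagged internal link by removing only
// the tracking parameters and leaving the refiners in place.
function canonicalFor(href) {
  const url = new URL(href);
  for (const key of [...url.searchParams.keys()]) {
    if (TRACKING_PARAMS.has(key)) {
      url.searchParams.delete(key);
    }
  }
  return url.toString();
}

console.log(canonicalFor("https://example.com/shoes?brand=acme&cid=nav-top"));
// https://example.com/shoes?brand=acme
```

This is the same distinction Webmaster Tools parameter handling asks you to make: which parameters change page content and which are only there for measurement.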
-
I manage a high-traffic ecommerce website and use Google Tag Manager to tag all internal links for review in GA.
If you want to use a query parameter, which I also actively use, you can apply settings in Google Webmaster Tools to have those parameters ignored so they don't cause duplicate content issues.