Unsolved Capturing Source Dynamically for UTM Parameters
-
Does anyone have a tutorial on how to dynamically capture the referring source to be populated in UTM parameters for Google Analytics?
We want to syndicate content and be able to see all of the websites that provided referral traffic for this specific objective. We want to set a specific utm_medium and utm_campaign but have the utm_source be dynamic and capture the referring website.
If we set a permanent utm_source, it would appear the same for all incoming traffic.
Thanks in advance!
-
@peteboyd said in Capturing Source Dynamically for UTM Parameters:
Thanks in advance!
UTM (Urchin Tracking Module) parameters are tags that you can add to the end of a URL to track the effectiveness of your marketing campaigns. Google Analytics uses these parameters to help you understand how users are interacting with your website and where they are coming from.
There are five different UTM parameters that you can use:
utm_source: This parameter specifies the source of the traffic, such as "google" or "facebook".
utm_medium: This parameter specifies the medium of the traffic, such as "cpc" (cost-per-click) or "social".
utm_campaign: This parameter specifies the name of the campaign, such as "spring_sale" or "promotion".
utm_term: This parameter specifies the term or keywords used in the campaign, such as "shoes" or "dress".
utm_content: This parameter specifies the content of the ad, such as the headline or the call-to-action.
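Put together, a fully tagged URL might look like the following (the domain and all the parameter values here are purely illustrative):
https://www.example.com/landing-page?utm_source=facebook&utm_medium=social&utm_campaign=spring_sale&utm_term=shoes&utm_content=header_cta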
To capture the source dynamically for UTM parameters, you can use JavaScript to read the value of the document.referrer property. This property returns the URL of the page that linked to the current page. You can then use this value to set the utm_source parameter dynamically.
For example, you might use the following code to set the utm_source parameter based on the referring URL:
var utmSource = '';

// Check the referring URL for known sources.
if (document.referrer.indexOf('google') !== -1) {
  utmSource = 'google';
} else if (document.referrer.indexOf('facebook') !== -1) {
  utmSource = 'facebook';
}

// Add the utm_source parameter to the URL.
var url = 'http://www.example.com?utm_source=' + utmSource;
This code will set the utm_source parameter to "google" if the user came to the page from a Google search, or to "facebook" if the user came to the page from Facebook. If the user came to the page from another source, the utm_source parameter will be left empty. You can then use this modified URL in your marketing campaigns to track their effectiveness and understand where your traffic is coming from.
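As a variation on the snippet above, here is a rough sketch that captures whatever referring hostname is present instead of checking a fixed list of sites. The function name, the 'direct'/'unknown' fallback labels, and the example.com URL are placeholder assumptions, not part of the original answer:

// Sketch: derive utm_source from whichever site referred the visitor.
function getDynamicSource() {
  if (!document.referrer) {
    return 'direct'; // no referrer: typed-in visit, bookmark, or stripped header
  }
  try {
    // Parse the referrer so only its hostname is used,
    // e.g. "www.facebook.com" rather than the full referring URL.
    return new URL(document.referrer).hostname;
  } catch (e) {
    return 'unknown'; // malformed referrer string
  }
}

// Build a tagged link with a fixed medium/campaign and a dynamic source.
var url = 'https://www.example.com/?utm_source=' + encodeURIComponent(getDynamicSource()) +
  '&utm_medium=referral&utm_campaign=content_syndication';

In the syndication scenario from the question, this would run on your own landing page, where document.referrer is the partner site that sent the visitor.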
-
@peteboyd you can refer to this tutorial: https://www.growwithom.com/2020/06/16/track-dynamic-traffic-google-tag-manager/
This should meet your requirements perfectly: it uses Google Tag Manager (GTM) to replace a static value with the referring URL in your utm_source.
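For anyone who wants the gist of that approach: in GTM this is typically done with a Custom JavaScript Variable that returns the referring hostname, which you then map into the campaign source field of your Google Analytics tag. A minimal sketch, assuming a 'direct' fallback label of our own choosing (not taken from the tutorial):

// GTM Custom JavaScript Variable: returns the referring hostname,
// or a fallback label when no usable referrer exists.
function() {
  if (!document.referrer) {
    return 'direct';
  }
  try {
    return new URL(document.referrer).hostname;
  } catch (e) {
    return 'unknown';
  }
}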
Related Questions
-
Setting Up Ecommerce Functionality for the First Time
Morning Mozers!
We are running up against a technical URL structure issue with the addition of eCommerce pages, and we are hoping you can point us in the right direction. We operate a printing company, so all our current product info pages are structured like:
website/printing/business-cards
website/printing/rackcards
website/printing/etc
The ecommerce functionality needs to go into a subfolder, but the question is: what should we name it? This is how the URLs would look for the main category and product pages:
/business-cards
/business-cards/full-uv-coaing-both-sides
We were thinking of going with either /order:
website/order/business-cards
website/order/business-cards/full-uv-coaing-both-sides
or maybe /shop/ or /print-order/, etc. Any ideas or suggestions?
Technical SEO | | CheapyPP
-
Unsolved Almost every new page becomes "Discovered - currently not indexed"
Almost every new page that I create becomes "Discovered - currently not indexed". It started a couple of months ago; before that, all pages were indexed within a couple of weeks. Now there are pages that have not been indexed since the beginning of September. From a technical point of view, the pages are fine and acceptable to Googlebot. The pages are in the sitemap and have content: basically, these are texts of 1,000+ or 2,000+ words. I've tried adding new content to pages and even transferring content to a new page with a different URL, but this way I managed to index only a couple of pages. Has anyone encountered a similar problem? Could it be that until September of this year, I hadn't added new content to the site for several months?
Please help, I am already losing heart.
Product Support | | roadlexx
-
Query string parameters always bad for SEO?
I've recently put some query string parameters into links leading to a 'request a quote' form, which auto-fill the 'product' field with the name of the product on the referring product page. E.g. Red Bicycle product page >>> link to RFQ form contains '?productname=Red-Bicycle' >>> form's product field's default value becomes 'Red-Bicycle'. I know URL parameters can lead to keyword cannibalisation and duplicate content (we use sub-domains for our language changer), but for something like this, am I potentially damaging our SEO? I appreciate I've not explained this very well. We're using Kentico, by the way, so K# macros are a possibility (I use a simple one to fill the form's default field).
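For context, a minimal client-side sketch of the prefill pattern described above; the 'product' field id is an assumption, and plain JavaScript here stands in for the Kentico K# macro the poster actually uses:

// Sketch: read ?productname=... from the current URL and prefill a form field.
var params = new URLSearchParams(window.location.search);
var productName = params.get('productname');
if (productName) {
  // 'product' is an assumed id for the RFQ form's product field.
  document.getElementById('product').value = productName;
}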
Technical SEO | | landport
-
What are the SEO recommendations for dynamic, personalised page content? (not e-commerce)
Hi, We will have pages on the website that display different page copy and images for different user personas. The main content (copy, headings, images) will be supplied dynamically, and I'm not sure how Google will index the B and C variations of these pages. As far as I know, the page URL won't change and won't have parameters. Google will crawl and index page content that comes from JavaScript, but I don't know which version of the page copy the search robot will index. If we set user-agent filters and serve the default page copy to search robots, we might risk a cloaking penalty, because users get different content than search robots. Is it better to have URL parameters for versions B and C of the content? For example:
/page for the default content
/page?id=2 for the B version
/page?id=3 for the C version
The dynamic content comes from the server side, so not all page copy variations are in the default HTML. I hope my questions make sense. I couldn't find recommendations for this kind of SEO issue.
Technical SEO | | Gyorgy.B
-
Affiliate Link is Trumping Homepage - URL parameter handling?
An odd and slightly scary thing happened today: we saw an affiliate string version of our homepage ranking number one for our brand, along with the normal full set of site-links. We have done the following:
1. Added this to our robots.txt:
User-agent: *
Disallow: /*?
2. Reinserted a canonical on the homepage (we had removed this when we implemented hreflang, as we had read that the two interfered with each other). We haven't had a canonical for a long time now, without issue.
Is this anything to do with the algo update, perhaps?! The third thing we're reviewing I'm slightly confused about: URL Parameter Handling in GWT. As advised with regard to affiliate strings, for the question "Does this parameter change page content seen by the user?" we have NO selected, which means they should be crawling one representative URL. But isn't it the case that we don't want them crawling or indexing ANY affiliate URLs? You can specify that Googlebot should not crawl any URL containing a particular string, but only if you select: "Yes. The parameter changes the page content." Should they know an affiliate URL from the original and not index them? I read a quote from Matt Cutts which suggested this (along with putting a "nofollow" tag on affiliate links, just in case). Any advice in this area would be appreciated. Thanks.
Technical SEO | | LawrenceNeal
-
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest... We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told of the ref parameter, and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate content issues and everything was good.
This is the chain of events:
1. Site migrated to the new platform following best practice, as far as I can attest to. The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between the relaunch on the 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified.
2. URL structure and URIs were maintained 100% (which may be a problem, now).
3. Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expand the report out and the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I. Run, not walk, to Google and do some Fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI).
4. Checked BING and it has indexed each root URL once, as it should.
Situation now:
- The site no longer uses the ?ref= parameter, although of course there still exist some external backlinks that use it. This was intentional and happened when we migrated.
- I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today it is at over 1,000 (another wtf moment).
- I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and the HTML site-map page.
The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and of course the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist, and won't drop them from the index but will instead apply a dupe content penalty. Or maybe call us a spam farm. Who knows.
Options that occurred to me (other than maybe making our canonical tags bold, or locating a Google bug submission form 😄) include:
A) robots.txt-ing *?ref=*, but to me this says "you can't see these pages", not "these pages don't exist", so it isn't correct
B) Hand-removing the URLs from the index through a page removal request per indexed URL
C) Applying a 301 to each indexed URL (hello BING dirty sitemap penalty)
D) Posting on SEOMoz, because I genuinely can't understand this.
Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting. I have no idea why and can't think of the best way to correct the situation. Do you? 🙂
Edited to add: As of this morning, the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There are no messages explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
Technical SEO | | Tinhat
-
If two websites pull the same content from the same source in a CMS, does it count as duplicate content?
I have a client who wants to publish the same information about a hotel (summary, bullet list of amenities, roughly 200 words + images) to two different websites that they own. One is their main company website, where the goal is booking; the other is a special program where that hotel is featured as an option for booking under a special promotion. Both websites pull the same content file from a centralized CMS, but they are different domains. My question is twofold:
• To a search engine, does this count as duplicate content?
• If it does, is there a way to configure the publishing of this content to avoid SEO penalties (such as a feed of content to the microsite, etc.), or should the content be written uniquely for each site?
Any help you can offer would be greatly appreciated.
Technical SEO | | HeadwatersContent
-
Canonical for stupid _GET parameters or not? [deep technical details]
Hi, I'm currently working on www.kupwakacje.pl, which is something like a travel agency. People can search for holidays and buy/reserve them. I know of plenty of problems on my website, and thanks to SEOmoz hopefully I will be able to fix them, but one is crucial and kind of hard to fix, I think. The search engine is provided by an external party in the form of a simple API which in the end responds with formatted HTML, which is completely stupid and pointless, but that's not the main problem. Let's dive in:
So, for example, a visitor goes to the homepage, selects Egypt and hits the search button. He will be redirected to www.kupwakacje.pl/wczasy-egipt/?ep3[]=%3Fsp%3D3%26a%3D2%26kt%3D%26sd%3D10.06.2011%26drt%3D30%26drf%3D0%26px and this is not a joke 😉 'wczasy-egipt' is my invention, obviously, and it means 'holidays-egypt'. I've tried to have at least 'something' in the URL that makes Google think it is indeed related to Egypt. The rest, the complicated ep3[] thingy, is a bunch of encoded parameters. This thing renders a list of hotels in the first step, a hotel-specific offer in the next, and the reservation page in the one after that. The problem is that all the links generated by this so-called API only change sub-parameters within the ep3[] parameter, so for example clicking on a single hotel changes the URL to: www.kupwakacje.pl/wczasy-egipt/?url=wczasy-egipt/&ep3[]=%3Fsid%3Db5onrj4hdnspb5eku4s2iqm1g3lomq91%26lang%3Dpl%26drt%3D30%26sd%3D10.06.2011%26ed%3D30.12.1999%26px%3D99999%26dsr%3D11%253A%26ds%3D11%253A%26sp%3D which obviously doesn't look very different from the first one.
What I would like to know is: shall I make all pages starting with 'wczasy-egipt' rel-canonical to the first one (www.kupwakacje.pl/wczasy-egipt), or shouldn't I? Google recognizes the webpage according to Webmaster Central, and recognizes the URL, but responds with mass duplicate content. What about positioning my website for the hotel names, i.e. long-tail optimization? I know it's a long and complicated post; thanks for reading, and I would be very happy with any tip or response.
Technical SEO | | macic