Duplicate content resulting from a JS redirect?
-
I recently created a CNAME (e.g. m.client-site.com) and added some JS (supplied by the mobile site vendor) to the head, which is designed to detect whether the user agent is a mobile device or not. This is part of the JS:
var CurrentUrl = location.href;
var noredirect = document.location.search;
if (noredirect.indexOf("no_redirect=true") < 0) {
    if ((navigator.userAgent.match(/(iPhone|iPod|BlackBerry|Android.*Mobile|webOS|Window
Now... Webmaster Tools is indicating two URL versions for each page on the site - for example:
1.) /content-page.html
2.) /content-page.html?no_redirect=true
which results in duplicate page titles and meta descriptions.
I am not quite adept enough at either JS or .htaccess to really grasp what's going on here... so an explanation of why this is occurring and how to deal with it would be appreciated!
-
You're welcome
-
That makes perfect sense. I think I will try instructing Webmaster Tools to ignore the variable, as you initially suggested - it's the quickest approach.
Thank you very much for your time and wisdom - much appreciated!
Dino
-
I'm not great with JS myself - I'm lucky enough to employ people to do that for me! However, here is what the script is doing:
- First, check whether "no_redirect=true" has been set - presumably to allow users to override the mobile version and view the full desktop version if they choose
- If that hasn't been set, look to see whether they are using an iPhone/iPod/BlackBerry/Android browser
- Presumably the truncated next line then performs the redirect - see the sketch below.
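Here is a minimal sketch of what the complete script probably looks like - the end of the user-agent regex and the redirect target are my assumptions, since the quoted snippet cuts off:

    var CurrentUrl = location.href;          // presumably used elsewhere in the full script
    var noredirect = document.location.search;
    if (noredirect.indexOf("no_redirect=true") < 0) {
        // Assumed completion of the truncated user-agent test
        if (navigator.userAgent.match(/(iPhone|iPod|BlackBerry|Android.*Mobile|webOS|Windows Phone)/i)) {
            // Assumed redirect target: the m. subdomain, keeping the current path
            window.location.href = "http://m.client-site.com" + location.pathname;
        }
    }

In other words, every desktop page silently offers a ?no_redirect=true escape hatch, and that variant is exactly what Webmaster Tools is now reporting.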
That seems fairly logical - no real problem there. However, the ?no_redirect=true version of each URL is getting picked up and indexed.
Because you want users to have access to that "duplicate" version but don't want the search engines to, you don't really want to prevent this URL from existing or redirect it away with .htaccess. It would be smarter to pick a method that targets only the search engines, such as:
- Stop them crawling it (through Webmaster Tools or robots.txt)
- Add a noindex tag to it
- Canonicalize it back to the main content (see the sketch below)
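As a sketch of the canonical option: since /content-page.html and /content-page.html?no_redirect=true serve the same document, one tag in its head covers both versions at once (the www domain here is assumed):

    <link rel="canonical" href="http://www.client-site.com/content-page.html" />

The clean URL then carries a self-referencing canonical, and the ?no_redirect=true copy automatically points back to it, so no extra server-side logic is needed.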
-
Hi Mat,
Thanks for the response!
I am really trying to understand what is occurring here and how to remedy it via JS or .htaccess.
Can you please provide further insight?
Thank you.
-
The easiest way to fix this is to tell Google to ignore the URL parameter no_redirect. You can do this in Webmaster Tools under Configuration > URL Parameters: find where no_redirect is listed, click Edit, and set it to "used for tracking".
Remember to do the same for Bing.
You could also block these URLs in robots.txt.
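For example - a sketch, noting that the * wildcard is honoured by Google and Bing but not by every crawler:

    User-agent: *
    Disallow: /*no_redirect=true

Bear in mind that robots.txt only stops crawling; URLs that are already indexed may linger until they are noindexed or canonicalized.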