Thanks a lot for your reply Stephan!
I would be super interested to read a little more around the subject. Do you have any studies or cases you might refer me to which describe the flow of link equity to "page C" from "Page A"?
Many thanks
Hi Mozzers,
I was musing about rel=canonical this morning and it occurred to me that I didn't have a good answer to the following question: if page A carries a rel=canonical pointing to page B, and page A contains internal links (say, to page C), how does Google treat those links?
I am thinking of whether those links would get counted twice, or, in the case of very-near-duplicates where page A has an extra sentence which includes an extra link, whether that extra link would count towards the internal link graph or not.
I suspect that Google would basically ignore all the content on page A and only look to page B, taking into account only page B's links.
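Just to make the scenario concrete, here's a minimal sketch of what I mean (the page names and URLs are made up):

```html
<!-- Page A (https://example.com/page-a), canonicalised to page B -->
<head>
  <link rel="canonical" href="https://example.com/page-b">
</head>
<body>
  <!-- Near-duplicate of page B's content, plus one extra sentence
       containing an extra internal link that page B does not have -->
  <p>See also our <a href="https://example.com/page-c">page C</a>.</p>
</body>
```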
Any thoughts?
Thanks!
Hi Carl,
There is no SEO advantage to using one of the (not-so) new TLDs. Theoretically, there is no SEO disadvantage either.
John Mueller made Google's position on this quite clear back in the day:
https://webmasters.googleblog.com/2015/07/googles-handling-of-new-top-level.html
Essentially, John states that new TLDs are treated like old TLDs: they are simply TLDs. Having keywords in the TLD presents no advantage, and having place names in the TLD gives no geographic emphasis.
Personally, I feel there is a bit of a UX issue with these new TLDs in the sense that users are not accustomed to them, and as a result don't feel too comfortable using them. From that point of view, I think they represent a significant disadvantage. If I were to see www.example.clothing printed on a business card, I think I would have to do a double-take before understanding it is a website, and I work in the industry. Maybe that's just me...
Hope that helps!
Hi Tertiary Education,
From the way you describe it, your pages are being considered duplicate content because they are so very similar; that is to say, they are near-duplicates, if not actual duplicates.
It sounds like the content on those pages is somewhat thin. Is there anything else on the page apart from the map? If the answer is "no", or "not much", then consider padding them out with (quality) content that serves your audience. I think this is what Patrick is referring to with his suggestions (forgive me if I am misinterpreting you, Patrick!), and I think he is absolutely right. Another thing to consider is creating unique titles and descriptions for each page (I mention it in case you have overlooked it).
By adding valuable, differentiated content to those pages, you will be distinguishing one from the other as well as providing value to your users. As Moz sees these changes it should stop flagging them as duplicates, and as Google re-indexes the pages it will begin to appreciate the differences between them and gradually index them as non-duplicate content.
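On the titles and descriptions point, here's a rough sketch of what differentiated head elements might look like for two near-identical location pages (the URLs and wording are hypothetical examples, not your actual pages):

```html
<!-- /locations/london -->
<title>London Campus Map and Directions | Example University</title>
<meta name="description" content="Find our London campus: interactive map, parking, and step-by-step directions from the nearest stations.">

<!-- /locations/manchester -->
<title>Manchester Campus Map and Directions | Example University</title>
<meta name="description" content="Find our Manchester campus: interactive map, on-site parking, and directions from Manchester Piccadilly.">
```

Even small, genuinely page-specific differences like these help both Moz and Google tell the pages apart.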
Hope that helps