
Posts made by max.favilli
-
RE: Dynamic contents causes duplicate pages
I don't know which CMS/framework you use, but it's unlikely you can set the canonical at template level.
-
RE: Dynamic contents causes duplicate pages
Moz doesn't pick one; it just signals the duplicate content. If you add the canonical, Moz should understand and unflag it as duplicate.
-
RE: Does link equity still count after an expired domain is purchased?
That is a very common and old gray hat/black hat technique.
You buy an expired domain with a good backlink profile from GoDaddy Auctions or some other similar website. There are a few online services that screen expired domains and offer directories of them filtered by topic/DA/PR etc...
Once you've bought the domain, let's say with a DA of 50, you can just 301 redirect it to your website, or build some content and link to your website to pass juice.
The problem is Google doesn't consider that legit. In both cases Google's algorithms have been instructed to discount the value of the juice passed, because they detect the change of ownership and, more importantly, the change of content.
But it may still work.
The cleanest way of doing it is to replicate the content after you buy the domain. You buy foo.com, download the old content from web.archive.org and keep serving it, then start adding content targeting the keywords you are after and linking to your domain. Done that way, Google usually doesn't notice the change and doesn't discount the juice value.
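Just to illustrate the web.archive.org step, a rough Python sketch (purely illustrative) that looks up the closest Wayback Machine snapshot of a URL through the public availability API and downloads it; foo.com/about is a made-up placeholder path:

```python
import requests

def fetch_archived_copy(url):
    """Find the closest Wayback Machine snapshot of a URL and download its HTML."""
    # The public availability API returns JSON describing the closest snapshot, if any.
    resp = requests.get("https://archive.org/wayback/available",
                        params={"url": url}, timeout=30)
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    if not closest or not closest.get("available"):
        return None
    # Fetch the archived HTML itself from the snapshot URL.
    archived = requests.get(closest["url"], timeout=30)
    archived.raise_for_status()
    return archived.text

# e.g. html = fetch_archived_copy("foo.com/about")
```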
Is that what your competitor is doing?
-
RE: Dynamic contents causes duplicate pages
- You can just add a canonical tag to signal to Google which version is the one to index.
- Or add a meta "noindex" to the versions you do not want to be indexed.
- Or just do nothing and let Google pick its preferred version for its index.
- Or you can tell Google what those URL parameters do and instruct it not to index those versions.
I would programmatically add the meta noindex, along the lines of the sketch below.
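For illustration only, a minimal sketch in Python/Flask (my assumption; the same idea works in any CMS or framework, and the /products route and parameter handling are made up) of programmatically emitting the canonical tag and a noindex meta for the parameterized versions of a page:

```python
from flask import Flask, request

app = Flask(__name__)

CANONICAL_BASE = "https://www.example.com"   # assumption: the preferred clean host

@app.route("/products")
def products():
    # Always point this page, including its parameterized variants, at the clean URL.
    head = f'<link rel="canonical" href="{CANONICAL_BASE}/products">'
    # Additionally mark any variant generated by query parameters as noindex.
    if request.args:
        head += '<meta name="robots" content="noindex, follow">'
    return f"<html><head>{head}</head><body>...</body></html>"
```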
-
RE: DNS vs IIS redirection
If you are not changing the IP address you don't need to change the DNS. If you do change the IP address, then in addition to updating the DNS records you also need to properly redirect traffic from the old URLs to the new ones.
With IIS the best option is using URL Rewrite, which is very flexible but a little tricky to set up if it's the first time you do so: http://www.iis.net/learn/extensions/url-rewrite-module/creating-rewrite-rules-for-the-url-rewrite-module
URL Rewrite operates at the web server level; it's powerful and does the job, but you may also consider doing redirects at the application level. Depending on the technology you use (PHP, .NET, ASPX, MVC) you have different tools. The advantage of doing it at the application level is that you can redirect dynamically, in other words use an algorithm to translate the old URLs to the new ones using whatever information is stored in the application cache, database, and so on. With IIS URL Rewrite you either statically redirect each old URL to its new URL, or you use regular expressions or wildcards to do it dynamically. In other words, with URL Rewrite you have a little less flexibility.
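To give an idea of the application-level approach, a minimal sketch in Python/Flask (purely illustrative; the post is about PHP/.NET on IIS, but the idea is the same in any stack, and the URL mapping below is made up — in practice it could come from the database or the application cache):

```python
from flask import Flask, abort, redirect

app = Flask(__name__)

# Hypothetical old-to-new URL mapping; in a real application this lookup could be
# a database query, a cache lookup, or an algorithmic translation of the old URL.
LEGACY_URLS = {
    "/old-page.aspx": "/new-page",
    "/old-category/item-1": "/catalog/item-1",
}

@app.route("/<path:old_path>")
def legacy_redirect(old_path):
    new_url = LEGACY_URLS.get("/" + old_path)
    if new_url:
        return redirect(new_url, code=301)   # permanent redirect for the moved URL
    abort(404)
```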
-
RE: Why google stubbornly keeps indexing my http urls instead of the https ones?
Thanks again Dirk! In the end I used Xenu Link Sleuth and I am happy with the result.
-
RE: Why google stubbornly keeps indexing my http urls instead of the https ones?
Forgot to mention: yes, I checked the scheme of the SERP results for those pages. It's not just Google not displaying it; it really still has the http version indexed.
-
RE: Why google stubbornly keeps indexing my http urls instead of the https ones?
Hi DC,
In Screaming Frog I can see the old http links. Usually they are manually inserted links and images in WordPress posts. I am more than eager to edit them; my problem is how to find all the pages containing them. In Screaming Frog I can see the links, but I don't see the referrer, i.e. in which page they are contained. Is there a way to see that in Screaming Frog, or in some other crawling software?
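For what it's worth, a rough Python sketch (my own homemade approach, not a Screaming Frog feature) of the kind of check I mean: it walks the site and, for every hard-coded http:// link or image it finds, prints the page that contains it (the referrer). The start URL is a placeholder:

```python
import requests
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

START = "https://www.example.com/"   # placeholder: your https root

class LinkCollector(HTMLParser):
    """Collects the href/src targets of the tags that usually carry hard-coded URLs."""
    def __init__(self):
        super().__init__()
        self.targets = []                         # (tag, url) pairs found on the page
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        url = attrs.get("href") or attrs.get("src")
        if tag in ("a", "img", "link", "script") and url:
            self.targets.append((tag, url))

def report_http_links(start):
    seen, queue = set(), [start]
    while queue:
        page = queue.pop()
        if page in seen:
            continue
        seen.add(page)
        try:
            html = requests.get(page, timeout=15).text
        except requests.RequestException:
            continue
        collector = LinkCollector()
        collector.feed(html)
        for tag, raw in collector.targets:
            absolute = urljoin(page, raw)
            if absolute.startswith("http://"):
                # "page" is the referrer: the page still embedding a plain http URL.
                print(f"{page} -> {absolute} ({tag})")
            # Keep crawling internal https pages only.
            if (tag == "a" and absolute.startswith("https://")
                    and urlparse(absolute).netloc == urlparse(start).netloc):
                queue.append(absolute.split("#")[0])

report_http_links(START)
```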
-
RE: Why google stubbornly keeps indexing my http urls instead of the https ones?
Mhhh, you are right, theoretically it could be the crawl budget. But if that were the case I should see it in the logs: crawler visits to those pages would be missing. Instead the crawler is happily visiting them.
By the way, how would you "force" the crawler to parse these pages?
I am going to check the sitemap now to remove that port number and try to split them. Thanks.
-
RE: Why google stubbornly keeps indexing my http urls instead of the https ones?
As far as I know the change of address from http to https doesn't work; the protocol change is not accepted when you do a change of address. And somewhere I read Google itself saying that when moving to https you should not do a change of address.
But they do suggest adding a new site for the https version in GWT, which I did, and in fact the traffic slowly transitioned from the http site to the https site in GWT in the weeks following the move.
-
Why google stubbornly keeps indexing my http urls instead of the https ones?
I moved everything to https in November, but there are plenty of pages which are still indexed by Google as http instead of https, and I am wondering why.
Example: http://www.gomme-auto.it/pneumatici/barum correctly redirects permanently to https://www.gomme-auto.it/pneumatici/barum
Nevertheless if you search for pneumatici barum: https://www.google.it/search?q=pneumatici+barum&oq=pneumatici+barum
The third organic result listed is still http.
Since we moved to https, Google's crawler has visited that page tens of times, the last one two days ago. But it doesn't seem to care to update the protocol in Google's index.
Anyone knows why?
My concern is that when I use APIs like SEMrush and Ahrefs I have to query twice, for both http and https; with a total of around 65k URLs I waste a lot of my quota.
-
RE: Pages are Indexed but not Cached by Google. Why?
You are totally wrong in guessing my path. You are going down a tunnel which doesn't have an exit. Personally I think that in this thread you got some good advice about what you should focus on, so I would stop feeling dismayed and confidently steer away from bad practices. Good luck.
-
RE: Pages are Indexed but not Cached by Google. Why?
First of all, I was just browsing and I got blocked as a bot, see below:
I would remove that cloaking.
Second, understanding your visitors' behavior is one of the most complex tasks; you don't know your users' behavior until you run a lot of tests, surveys and so on...
-
RE: Pages are Indexed but not Cached by Google. Why?
Well, then I totally agree with you, Ryan, thanks for the answer. With a DA of 1, you are absolutely right.
-
RE: Pages are Indexed but not Cached by Google. Why?
Let me say it straight: all that bot blocking is not a good idea.
I have been there a few times in the past, especially for e-commerce, where scraping to compare prices is very common, and I tried blocking scrapers many times. Maybe I am not that good, but in the end I gave up, because the only thing I was able to do was annoy legitimate users and legitimate bots.
I scrape other websites too for price comparison, tens of websites. Since I don't want to be blocked, I split the requests among different tasks, add a random delay between each request, fake header data like the user agent (pretending to be Firefox on a Windows PC), and cycle through different proxies to continuously change IP address.
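To give an idea, a minimal Python sketch of the kind of throttled client I am describing (the proxy pool, delay range, and user agent string are just placeholders):

```python
import random
import time

import requests

# Placeholders: your own proxy pool and a Firefox-on-Windows user agent string.
PROXIES = ["http://proxy-1.example:8080", "http://proxy-2.example:8080"]
HEADERS = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:115.0) "
                         "Gecko/20100101 Firefox/115.0"}

def fetch(url):
    time.sleep(random.uniform(2, 8))                # random delay between requests
    proxy = random.choice(PROXIES)                  # rotate through different IP addresses
    return requests.get(url, headers=HEADERS, timeout=20,
                        proxies={"http": proxy, "https": proxy})
```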
So as you can see, it's much harder to block scrapers than it seems.
Nor would I use JS to block cut&paste. I have no data to base my judgement on, but it's annoying for users, it doesn't sound compliant with accessibility, it stinks, and Google usually doesn't like things that stink. Plus... if someone wants to scrape your content you are not going to block them that way.
-
RE: Pages are Indexed but not Cached by Google. Why?
Ryan, I don't agree. It's true that external factors (in other words backlinks) nowadays have the biggest impact, but on-page optimization, as far as my little experience tells me, still affects ranking and is worth working on.
And if we don't keep track of changes on pages and changes in ranking, how can we know what is working and what is not?
Especially since there's no golden rule, and what works for one site doesn't necessarily work for another.
To give an example, I had a page ranking in position 1 for a search query with a volume of 50k+ and very high competition. I expanded the content to improve ranking for some additional queries, and it worked: it climbed from the 2nd and 3rd SERP pages to the 1st for a couple of those queries (I use Moz Rank Tracker, SEMrush, and ProRankTracker to monitor ranking).
Unfortunately the ranking for the search query with the highest volume moved from position 1 to position 2. I changed the content a little bit to add some keywords, which made sense because it re-balanced the keyword density now that the content was bigger. Within 24 hours it got back to position 1, without damaging the improvement on the other queries.
In many other cases I improved ranking on pages without any backlinks, just by improving the content, and I am talking about business-critical pages with high competition.
So I would say on-page optimization is still worth spending time on; testing the effect of the changes is a must, and monitoring Google ranking fluctuations is a must too.
Of course I am not saying off-page optimization is not important; it's fundamental, I take that for granted.