Duplicate title while setting canonical tag.
-
Hi Moz fans,
My website, https://finance.rabbit.co.th/, offers financial services, so our main keywords relate to "Insurance" in Thai. Today I ran into an issue with canonical tags.
We have around 5,000 URLs like https://finance.rabbit.co.th/car-insurance?showForm=1&brand_id=9&model_id=18&car_submodel_id=30&ci_source_id=rabbit.co.th&car_year=2014, each with a canonical tag pointing to https://finance.rabbit.co.th/car-insurance. But our site audit tool is now warning about "Duplicate Page Title (Canonical)". Could this hurt our rankings?
What should we do: set NoIndex, NoFollow on every URL that contains a ? parameter, or leave them as they are?
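For reference, the canonical setup described above would look something like this in the <head> of each parameterized page (a sketch based on the URLs from the question):

```html
<!-- In the <head> of each parameterized variant, e.g.
     /car-insurance?showForm=1&brand_id=9&model_id=18&... -->
<link rel="canonical" href="https://finance.rabbit.co.th/car-insurance" />
```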
-
Using the Disallow directive in your robots.txt file is probably your best bet for making sure our tools don't crawl those pages and report duplicate page titles.
That said, I'm not an SEO expert, so it may be worth checking with a web developer to see if they have different suggestions.
-
Thanks, everyone, and sorry for the late reply.
@tawnycase, so I need to set up robots.txt to make crawlers ignore those links, right? In that case, should I disallow just the ?parameter URLs, since I don't want to NoIndex the main folders?
-
Hi there! Tawny from the Help Team here.
Even with a NoIndex, NoFollow tag on those pages, our tools will still crawl them and report on everything up to that tag. The best way to prevent our crawler from accessing these dynamically tagged pages is to block it using the Disallow directive in your robots.txt file. It would look something like this:
User-agent: Rogerbot
Disallow: /*?showForm
...and so on, until you have blocked all of the parameters or tags that may be causing these errors. You can also use the wildcard user-agent * in order to block all crawlers from those pages, if you prefer.
Here is a great resource about the robots.txt file that might be helpful: https://moz.com/learn/seo/robotstxt
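If you want to sanity-check a rule like this before deploying it, Python's standard `urllib.robotparser` can simulate the match. This is only a sketch: real crawlers such as Googlebot and Rogerbot support `*` wildcards in paths, but the stdlib parser does plain prefix matching, so the rule below is written without a wildcard.

```python
# Sketch: verify a robots.txt rule blocks parameterized URLs but not the
# canonical page. Note: urllib.robotparser does NOT understand * wildcards,
# so this uses a plain prefix rule instead of "Disallow: /*?showForm".
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: Rogerbot
Disallow: /car-insurance?showForm
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The parameterized variant matches the Disallow prefix and is blocked...
blocked = parser.can_fetch(
    "Rogerbot",
    "https://finance.rabbit.co.th/car-insurance?showForm=1&brand_id=9")

# ...while the canonical URL stays crawlable.
allowed = parser.can_fetch(
    "Rogerbot", "https://finance.rabbit.co.th/car-insurance")

print(blocked, allowed)  # False True
```

Testing rules this way is cheap insurance: a Disallow prefix that is too broad would silently block the canonical page itself.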
I hope this helps!
-
You'll definitely want to keep that canonical tag in place. Some tools don't recognize canonicals, so I wouldn't worry too much about duplicate notifications caused by parameters like that. Also note that if you NoIndex that page, the tag will apply to the root of that URL, not just the parameterized version.
Related Questions
-
Pages flagged in Search Console as having a "noindex" tag do not have a meta robots tag?
Hi, I am running a technical audit on a site which is causing me a few issues. The site is small and awkwardly built using lots of JS, animations and dynamic URL extensions (bit of a nightmare). I can see that it has only 5 pages indexed in Google despite having over 25 pages submitted via the sitemap in Search Console. The beta Search Console is telling me that there are 23 URLs marked with a 'noindex' tag, however when I view the page source and check the code of these pages, there are no meta robots tags at all - I have also checked the robots.txt file. Also, both Screaming Frog and Deep Crawl are failing to pick up these URLs, so I am at a bit of a loss about how to find out what's going on. I believe the creative agency who built the site had no idea about general website best practice, and that the dynamic URL extensions may have something to do with the noindexing. Any advice on this would be really appreciated. Are there any other ways of noindexing pages which the dev / creative team might have implemented by accident? What am I missing here? Thanks,
-
Canonical URL Tag: Confusing Use Case
We have a webpage that changes content each evening at midnight -- let's call this page URL /foo. This allows a user to bookmark URL /foo and obtain new content each day. In our case, the content on URL /foo for a given day is the same content that exists on another URL on our website. Let's say the content for November 5th is URL /nov05, November 6th is /nov06 and so on. This means on November 5th, there are two pages on the website that have almost identical content -- namely /foo and /nov05. This is likely a duplication-of-content violation in the view of some search engines. Is the Canonical URL Tag designed to be used in this situation? The page /nov05 is the permanent page containing the content for the day on the website. This means page /nov05 should have a Canonical Tag that points to itself, and /foo should have a Canonical Tag that points to /nov05. Correct? Now here is my problem. The page at URL /foo has the fourth-highest page authority on our 2,000+ page website. URL /foo is a key part of the marketing strategy for the website. It has the second-largest number of external links, second only to our home page. I must tell you that I'm concerned about using a Canonical URL Tag that points away from the URL /foo to a permanent page on the website like /nov05. I can think of a lot of negative things that could happen to the rankings of the page by making a change like this, and I am not sure what we would gain. Right now /foo has a Canonical URL Tag that points to itself. Does anyone believe we should change this? If so, to what and why? Thanks for helping me think this through! Greg
-
Duplicate content issue with Wordpress tags?
Would Google really discount duplicate content created by Wordpress tags? I find it hard to believe considering tags are on and indexed by default and the vast majority of users would not know to deindex them . . .
-
Wordpress: Tags generate duplicate Content - just delete the tags!?
People I've asked say tags are bad and spammy, and as far as I can see they generate all my duplicate page content issues. So the big question is: why does Google so often prefer to show these tag URLs in SERPs... so they can't be too bad! :)))? Then, after some research, I found the "Term Optimizer" on Yoast.com... that should help exactly with this problem, but it seems not to be available anymore? So maybe there is another plugin that can help... or should I just delete all tags from my blog and install permanent redirects?
Is this the solution?
-
Changing title tags in WordPress media pages
Hello! I have a problem with duplicate titles on 59 pages in WordPress. I guess it happened after a recent WP update, because suddenly I got a spike of 59 from zero in a day. The SEOmoz crawl report states that there are 59 duplicate titles. As you may see in the picture, those are all media pages, whose titles are written really badly with the formula postname/filename/blogTitle, ending up with truncated, too-long title tags that result in duplicate page titles. How can I simply hide these media pages, or sculpt the title tags the way I want? I am using the All in One SEO WP plugin, which doesn't seem to provide a solution. Thank you all! DoMiSol Rossini
-
We have over 3000 duplicate page titles, please help!
Hi, we ran a crawl report and have over 3,000 duplicate page titles. I'm not sure why this is happening... could it be because we have put posts in multiple categories? Can anyone help us with a quick fix? Our site is www.stayathomemum.com.au. Thank you kindly, Chris
-
A week ago I asked how to remove duplicate files and duplicate titles
Three weeks ago we had a very large number of site errors revealed by crawl diagnostics. These errors related purely to the presence of both http://domain name and http://www.domain name. We used the rel canonical tag in the head of our index page to direct everything to the www. preference, and we have seen no improvement. Matters got worse two weeks ago, and I checked with Google Webmaster Tools and found that Google had somehow lost our preference choice. A week ago I asked how to overcome this problem and received good advice about how to re-enter our preference for the www version with Google. This we did, and it was accepted. We also submitted a new sitemap.xml, which was also acceptable to Google. Today, a week later, we find that we have even more duplicate content (over 10,000 duplicate errors) showing up in the latest diagnostic crawl. Does anyone have any ideas? (Getting a bit desperate.)