After adding an SSL certificate to my site I encountered problems with duplicate pages and page titles
-
Hey everyone!
After adding an SSL certificate to my site, it seems that every page on my site has duplicated itself. I think that is because it has combined www.domainname.com and domainname.com. I would really hate to add a rel canonical to every page to solve this issue. I am sure there is another way, but I am not sure how to do it. Has anyone else run into this problem, and if so, how did you solve it?
Thanks, and any and all ideas are much appreciated.
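For reference, the per-page fix being avoided here is a canonical link element in the head of each page; a minimal sketch, assuming the www version is the preferred one (hypothetical domain and path):

```html
<!-- Placed in the <head> of every page, pointing both hostname
     variants at the single preferred URL. -->
<link rel="canonical" href="https://www.domainname.com/page-name/" />
```

A server-side 301 redirect, as suggested in the answers below, achieves the same consolidation without maintaining this tag on every page.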
-
Hi Donford,
Excuse the delay. We have been very busy, but we are working on updating our .htaccess file. Thank you for the quick response. We will let you know if it resolves our issues.
-
**Don is right. If you're on Nginx, use one of the following:**

```nginx
server {
    server_name example.com;
    rewrite ^/(.*) https://example.com/$1 permanent;
}
```

```nginx
server {
    listen 80;
    server_name my.domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name my.domain.com;
    [....]
}
```

```nginx
server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl spdy;
    server_name yourdomain.com www.yourdomain.com;
    ssl on;
    ssl_certificate /var/www/yourdomain.com/cert/ssl-bundle.crt;
    ssl_certificate_key /var/www/yourdomain.com/cert/yourdomain.com.key;
    access_log /var/log/nginx/yourdomain.com.access.log rt_cache;
    error_log /var/log/nginx/yourdomain.com.error.log;
    root /var/www/yourdomain.com/htdocs;
    index index.php index.htm index.html;
    include common/wpfc.conf;
    include common/wpcommon.conf;
    include common/locations.conf;
}
```
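Note that the third example keeps serving both the www and non-www hosts over HTTPS, so it fixes the http/https split but not the www/non-www duplication the original question describes. A minimal sketch that canonicalizes everything to the bare domain (the domain and certificate paths are hypothetical):

```nginx
# Send all HTTP traffic, plus HTTPS requests for the www host,
# to the single canonical https://example.com.
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;   # hypothetical path
    ssl_certificate_key /etc/nginx/ssl/example.com.key;   # hypothetical path
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    root /var/www/example.com/htdocs;
    index index.php index.html;
}
```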
https://christiaanconover.com/blog/how-to-redirect-http-to-https-in-nginx
https://moz.com/learn/seo/redirection
https://wp-mix.com/htaccess-redirect-http-to-https/
https://www.digitalocean.com/community/questions/http-https-redirect-positive-ssl-on-nginx
-
Hi Travis,
Yes, you can use your .htaccess file to fix this. (This file should be located in your site's document root.)
A simple example would be:

```apache
RewriteEngine On
RewriteCond %{HTTP_HOST} ^YourDomain.com
RewriteRule ^(.*)$ http://www.YourDomain.com/$1 [R=301,L]
```

What this does is look for any request for a non-www URL and, if found, 301 redirect it to the www version of the page.
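Since the duplication appeared after adding SSL, it may be worth forcing HTTPS and the www host in the same pass. A minimal sketch, assuming Apache with mod_rewrite enabled and a hypothetical domain:

```apache
# Redirect any request that is either on plain HTTP or on the
# non-www host to the canonical https://www.yourdomain.com URL.
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.yourdomain.com/$1 [R=301,L]
```

With a rule like this, domainname.com, www.domainname.com, and the plain-HTTP variants all collapse to a single HTTPS URL, so search engines see one version of each page.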
Hope this helps,
Don
Related Questions
-
Google Adding Incorrect Location to the end of Title Tags in SERPs
I have an issue with the way Google is adding to a client's Title Tag. Since we relaunched the website a few months ago, Google has been adding an indiscriminate "– London" to the end of title tags. That would be fine if the company were solely London based, but we have stores outside London too, and it's adding "– London" to the end of those individual store title tags as well. So, if you do a search for "location widget" our page title is:
"location widget | Brand name"
but then Google pops in:
"location widget | Brand name - London"
Which isn't great if the location is in Scotland! We are adding structured data to the store pages to try and combat this, and the store pages are all well optimised for the location (and ranking well), but I'm wondering if I've missed anything obvious? I thought it might lessen as the new site became more trusted by Google, but the rogue "London" seems to be increasing... Thanks for your help!
Intermediate & Advanced SEO | DrewDaviesLondon
-
Paginated Pages Page Depth
Hi Everyone, I was wondering how Google counts the page depth on paginated pages. DeepCrawl is showing our primary pages as being 6+ levels deep, but without the blog, or with an infinite scroll on the /blog/ page, I believe it would be only 2 or 3 levels deep. Using Moz's blog as an example, is https://moz.com/blog?page=2 treated as being on the same level, in terms of page depth, as https://moz.com/blog? If so, is it the rel="prev" (https://site.com/blog) and rel="next" (https://site.com/blog?page=3) markup that helps Google recognize this? Or does Google treat the page depth the same way that DeepCrawl is showing it, with the blog posts on page 2 being +1 in page depth compared to the ones on page 1, for example? Thanks, Andy
Intermediate & Advanced SEO | AndyRSB
-
Making Filtered Search Results Pages Crawlable on an eCommerce Site
Hi Moz Community! Most of the category & sub-category pages on one of our client's ecommerce site are actually filtered internal search results pages. They can configure their CMS for these filtered cat/sub-cat pages to have unique meta titles & meta descriptions, but currently they can't apply custom H1s, URLs or breadcrumbs to filtered pages. We're debating whether 2 out of 5 areas for keyword optimization is enough for Google to crawl these pages and rank them for the keywords they are being optimized for, or if we really need three or more areas covered on these pages as well to make them truly crawlable (i.e. custom H1s, URLs and/or breadcrumbs)…what do you think? Thank you for your time & support, community!
Intermediate & Advanced SEO | accpar
-
Concerns of Duplicative Content on Purchased Site
Recently I purchased a site of 50+ DA (oldsite.com) that had been offline/404 for 9-12 months under the previous owner. The purchase included the domain and the content previously hosted on it. The backlink profile is 100% contextual and pristine. Upon purchasing the domain, I did the following:

- Rehosted the old site and content, which had been down for 9-12 months, on oldsite.com
- Allowed a week or two for indexation of oldsite.com
- Hosted the old content on my newsite.com and then performed 100+ contextual 301 redirects from oldsite.com to newsite.com using direct and wildcard .htaccess rules (a sketch follows this question)
- Issued a press release declaring the acquisition of oldsite.com by newsite.com
- Performed a site "Change of Name" in Google from oldsite.com to newsite.com
- Performed a "Site Move" in Bing/Yahoo from oldsite.com to newsite.com

It's been close to a month, and while organic traffic is growing gradually, it's not what I would expect from a domain with 700+ referring contextual domains. My current concern is that original attribution of the content on oldsite.com shifted to scraper sites during the year or so it was offline. For example:

1. Oldsite.com has full attribution prior to going offline
2. Scraper sites scan the site and repost the content elsewhere (unsuccessful at the time because Google knows the original attribution)
3. Oldsite.com goes offline
4. Scraper sites continue hosting the content
5. Google loses the consumer-facing cache from oldsite.com (and potentially loses original attribution of the content)
6. Google reassigns original attribution to a scraper site
7. Oldsite.com is hosted again; Google no longer remembers its original attribution and thinks the content is stolen
8. Google then silently punishes oldsite.com and newsite.com (which it redirects to)

QUESTIONS

- Does this sequence have any merit?
- Does Google keep track of original attribution after the content ceases to exist in Google's search cache?
- Are there any tools or ways to tell if you're being punished for content posted elsewhere on the web, even if you originally had attribution?
- Unrelated: are there any other steps recommended for a site change as described above?
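The direct and wildcard rules described above might look like the following in oldsite.com's .htaccess; a sketch with hypothetical URLs, assuming Apache with mod_rewrite:

```apache
RewriteEngine On

# Direct one-to-one redirect for a specific page (hypothetical paths)
RewriteRule ^old-article/?$ https://newsite.com/new-article/ [R=301,L]

# Wildcard: send everything else to the same path on newsite.com
RewriteRule ^(.*)$ https://newsite.com/$1 [R=301,L]
```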
Intermediate & Advanced SEO | PetSite
-
Pagination causing duplicate content problems
Hi, the pagination on our website www.offonhols.com is causing duplicate content problems. Is the best solution adding rel="prev" / rel="next" to the hrefs? As of now, the pagination links at the bottom of the page are just:
http://offonhols.com/default.aspx?dp=1
http://offonhols.com/default.aspx?dp=2
http://offonhols.com/default.aspx?dp=3
etc.
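For reference, the markup being asked about is a pair of link elements in the head of each paginated page; a minimal sketch for page 2, using the URLs from this question:

```html
<!-- In the <head> of http://offonhols.com/default.aspx?dp=2 -->
<link rel="prev" href="http://offonhols.com/default.aspx?dp=1" />
<link rel="next" href="http://offonhols.com/default.aspx?dp=3" />
```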
Intermediate & Advanced SEO | offonhols
-
External resources page (AKA a satellite site) - is it a good idea?
So the general view on satellite sites is that they're not worth it because of their low authority and the limited link juice they provide. However, I have an idea that is slightly different to the standard satellite site model. A client's website is in a particular niche, but a lot of websites that I have identified for potential links are not interested in linking because the client is a private commercial company. Many are only interested in linking to charities or simple resource pages. I created a resource section on the website, but many are still unwilling to link to it, as it is still part of a commercial website. The website is performing well and is banging on the door of page one for some really competitive keywords. A few more links would make a massive difference. One idea I have is to create a standalone resource website that links to our client's website. This would make it easy to get links from sites that would flat out refuse to link to the main website. This would increase the authority of the resource and result in more link juice to the primary website. Now I know that the link juice from this website will not be as good as getting links directly to the primary website, but would it still be a good idea? Or would my time be better spent trying to get a handful of links directly to the client's website? Alternatively, I could set up a sub-domain for the resource, but I'm not sure that this would be as successful.
Intermediate & Advanced SEO | maxweb
-
Can use of the id attribute to anchor text down a page cause page duplication issues?
I am producing a long glossary of terms and want to make it easier to jump down to various terms. I am using the `<a id="anchor-text">` attribute, so am appending #anchor-text to a URL to reach the correct spot. Does anyone know whether Google will pick this up as separate duplicate pages? If so, any ideas on what I can do? Apart from not doing it to start with? I am thinking 301s won't work, as I want the URL to work. And rel=canonical won't work, as there is no actual page code to add it to. Many thanks for your help, Wendy
Intermediate & Advanced SEO | Chammy
-
Affiliate Links Added and Site Dropped Only in Google
My site was dropshipping a product and we switched to an affiliate offer. We had three to four links to different affiliate products. Our site dropped the next day. It had been number 1 for 6 months, has a PR of 6, and is 2 years old. It has been 2 weeks and the site hasn't jumped back. Any suggestions on how to handle this?
Intermediate & Advanced SEO | dkash