Duplicate Page Title errors for an eCommerce store!
-
I recently launched my eCommerce startup, hosted on Shopify and linked with Moz. From my first crawl report I am getting 580 Duplicate Page Title errors, i.e. all my collection pages have the same title. I have googled and searched the Moz community but cannot find a fix. Some of the URLs are:
http://www.onlypetstore.com/collections/all
http://www.onlypetstore.com/collections/all?page=10
http://www.onlypetstore.com/collections/all?page=100
http://www.onlypetstore.com/collections/all?page=101
http://www.onlypetstore.com/collections/all?page=102
http://www.onlypetstore.com/collections/all?page=103
I am new to SEO and any suggestions will be a great help to me.
-
Actually, I would suggest something totally different. What platform are you on?
Send a variable to the title tag that prints out the page number. If you are on page 1, print no page number. That will solve the issue. If you happen to be using PrestaShop, something like this would work perfectly for you: http://www.presto-changeo.com/en/prestashop-modules/25-duplicate-url-redirect.html; it handles that.
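Since the store in question is on Shopify, the "page number in the title" idea above can be sketched in Liquid. This is a minimal, illustrative snippet for the theme's `<title>` tag, assuming the theme exposes the standard `page_title`, `current_page`, and `shop.name` objects; the exact wording is hypothetical:

```liquid
{%- comment -%}
  theme.liquid <title> sketch: append the page number on paginated
  collection pages (page 2 onwards) so every page gets a unique title.
{%- endcomment -%}
<title>
  {{ page_title }}
  {%- if current_page != 1 %} - Page {{ current_page }}{% endif -%}
  {%- unless page_title contains shop.name %} - {{ shop.name }}{% endunless -%}
</title>
```

With something like this, /collections/all?page=10 would render a title such as "All Products - Page 10", while page 1 keeps the plain title, so each paginated URL carries a distinct page title.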
-
OK, then something else is wrong... I will check that as well, but my office hours are ending now... I will be back tomorrow... hold on.
-
Thank you for your answers.
I am getting 6 Duplicate Page Content and 527 Duplicate Page Title errors. I did what you suggested and added rel=canonical for the first page, but I will check that part again.
-
Duplicate content is often closely connected to duplicate page titles, though usually not with shops.
How many duplicate content errors do you have?
Keep in mind that you present the same title on every paginated page, because you have several pages for your customers to scroll through. Tell the Googlebot which URL it should index by using rel=canonical.
Together with rel="next" and rel="prev", this should solve the issue.
Have you done that?
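In a Shopify theme, the rel=canonical plus rel="next"/rel="prev" setup described above could be sketched roughly like this in the `<head>`. This is an illustrative fragment, assuming it renders where Shopify's `canonical_url` and `paginate` objects are available (the `paginate` object only exists inside a `{% paginate %}` block):

```liquid
{%- comment -%}
  Tell crawlers which URL to index, and link the neighbouring
  pages of the paginated series together.
{%- endcomment -%}
<link rel="canonical" href="{{ canonical_url }}">
{% if paginate.previous %}<link rel="prev" href="{{ paginate.previous.url }}">{% endif %}
{% if paginate.next %}<link rel="next" href="{{ paginate.next.url }}">{% endif %}
```

With this in place, a URL like /collections/all?page=10 declares its own canonical URL and points to pages 9 and 11, so crawlers can treat the collection as one paginated sequence rather than hundreds of near-duplicate pages.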
-
The pagination has been done like that to facilitate infinite scrolling.
Can you tell me where to start, or just give a brief outline of what needs to be done? The rest I will dig up myself.
-
You also have a pagination problem... take a closer look at the rel="prev" & rel="next" method.
I am sorry, but it's a huge topic... it can't be answered briefly without risking mistakes in the detailed explanation.
Related Questions
-
API for On Page tool
I'm looking for a tool similar to On-Page Grader (Moz) or Focus Keyword (Yoast) with an API. We are building out our internal CRM system. Even though none of these tools can replace manual on-page analysis, it will be used as a metric and to catch human mistakes.
Moz Pro | | OscarSE0 -
When Is Page Optimization Section Going To Be Fixed?
In the last weekly reports, the Page Optimization section has not been working correctly. When is it going to be fixed? Please let us know. Thank you.
Moz Pro | | Videogamefan0 -
Htaccess and robots.txt and 902 error
Hi, this is my first question here and I truly hope someone will be able to help. It's quite a detailed problem and I'd love to be able to fix it with your kind help. It concerns htaccess files, robots.txt files, and 902 errors.

In October I created a WordPress website from what was previously a rather dated non-WordPress site. I built the new site on a subdomain I created on the existing site, so that the live site could remain live while I worked on the subdomain. The site I built on the subdomain is now live, but I am concerned about the old htaccess and robots.txt files that are still in place, and I wonder if I should just delete the old ones and leave only the new files on the new site. I created new htaccess and robots.txt files on the new site and have left the old htaccess files there. All the old content files are still on the server under a folder called 'old files', so I am assuming these aren't affecting matters. I access the htaccess and robots.txt files by clicking on 'public_html' via FTP.

I did a Moz crawl and was astonished to see a 902 network error saying that it wasn't possible to crawl the site, but Moz later alerted me to say that the report was ready. I see 641 crawl errors (449 medium priority | 192 high priority | zero low priority). Please see the attached image. Each of the errors seems to have status code 200; this seems to apply mainly to the images on each of the pages, e.g. domain.com/imagename. The new website is built on the 907 Theme, which has page sections and parallax sections on the home page and throughout the site. To my knowledge the content and the images on the pages are not duplicated, because I have made each page as unique and original as possible. The report says 190 pages have been duplicated, so I have no clue how this can be or how to approach fixing it.

Since October, when the new site was launched, approx. 50% of incoming traffic has dropped off at the home page, and that is still the case, but the site still continues to get new traffic according to Google Analytics. However, Bing, Yahoo, and Google show a low level of indexing and exposure, which may indicate that the search engines are having difficulty crawling the site. Google Webmaster Tools reports no crawl errors. W3TC is a WordPress caching plugin which I installed just a few days ago to improve page speed, so I am not querying anything here about W3TC unless someone spots that it might be a problem; but like I said, there have been problems with traffic dropping off when visitors arrive on the home page. The Yoast SEO plugin is being used.

I have included the contents of the htaccess and robots.txt files below. The pages on the subdomain point to the live domain, as explained to me by the person who did the site migration. I'd like the site to be free of pages and files that shouldn't be there, and I feel the site needs a clean-up, as well as knowing whether the robots.txt and htaccess files from the old site should actually be there or should be deleted. OK, here goes with the information in the files. Site 1) refers to the current website, site 2) refers to the subdomain, and site 3) refers to the folder that contains all the old files from the old non-WordPress file structure.

**************** 1) htaccess on the current site: *********************

# BEGIN W3TC Browser Cache
<IfModule mod_deflate.c>
<IfModule mod_headers.c>
Header append Vary User-Agent env=!dont-vary
</IfModule>
<IfModule mod_filter.c>
AddOutputFilterByType DEFLATE text/css text/x-component application/x-javascript application/javascript text/javascript text/x-js text/html text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon application/json
<IfModule mod_mime.c>
# DEFLATE by extension
AddOutputFilter DEFLATE js css htm html xml
</IfModule>
</IfModule>
</IfModule>
# END W3TC Browser Cache
# BEGIN W3TC CDN
<FilesMatch ".(ttf|ttc|otf|eot|woff|font.css)$">
<IfModule mod_headers.c>
Header set Access-Control-Allow-Origin "*"
</IfModule>
</FilesMatch>
# END W3TC CDN
# BEGIN W3TC Page Cache core
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteRule .* - [E=W3TC_ENC:_gzip]
RewriteCond %{HTTP_COOKIE} w3tc_preview [NC]
RewriteRule .* - [E=W3TC_PREVIEW:_preview]
RewriteCond %{REQUEST_METHOD} !=POST
RewriteCond %{QUERY_STRING} =""
RewriteCond %{REQUEST_URI} /$
RewriteCond %{HTTP_COOKIE} !(comment_author|wp-postpass|w3tc_logged_out|wordpress_logged_in|wptouch_switch_toggle) [NC]
RewriteCond "%{DOCUMENT_ROOT}/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" -f
RewriteRule .* "/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" [L]
</IfModule>
# END W3TC Page Cache core
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

(I have 7 301 redirects in place for old page URLs to link to new page URLs.)

# Force non-www:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www.domain.co.uk [NC]
RewriteRule ^(.*)$ http://domain.co.uk/$1 [L,R=301]

**************** 1) robots.txt on the current site: *********************

User-agent: *
Disallow:
Sitemap: http://domain.co.uk/sitemap_index.xml

**************** 2) htaccess in the subdomain folder: *********************

# Switch rewrite engine off in case this was installed under HostPay.
RewriteEngine Off
SetEnv DEFAULT_PHP_VERSION 53
DirectoryIndex index.cgi index.php
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /WPnewsiteDee/
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /subdomain/index.php [L]
</IfModule>
# END WordPress

**************** 2) robots.txt in the subdomain folder: *********************

This robots.txt file is empty.

**************** 3) htaccess in the Old Site folder: *********************

Deny from all

**************** 3) robots.txt in the Old Site folder: *********************

User-agent: *
Disallow: /

I have tried to be thorough, so please excuse the length of my message. I really hope one of you great people in the Moz community can help me with a solution. I have SEO knowledge and I love SEO, but I have not come across this before and I really don't know where to start. Best regards to you all, and thank you for reading this. moz-site-crawl-report-image_zpsirfaelgm.jpg
Moz Pro | | SEOguy10 -
Why does the SEOmoz bot consider these duplicate pages?
Hello here, the SEOmoz bot has recently marked the following two pages as duplicates: http://www.virtualsheetmusic.com/score/PatrickCollectionFlPf.html?tab=mp3 http://www.virtualsheetmusic.com/score/PatrickCollectionFlPf.html?tab=pdf I don't personally see how these pages can be considered duplicates, since their content is quite different. Thoughts??!!
Moz Pro | | fablau0 -
Does the page authority data also consider on-page factors like the presence of the keyword in the title, meta text, and keyword frequency?
The Moz difficulty score considers four factors for the top websites. Are the on-page factors included in the page authority data?
Moz Pro | | iQuanti0 -
Duplicate page content due to Sort By dropdown
Hi there, I have over 150 Duplicate Page Title errors showing up in SEOmoz, but on closer inspection these are related to the 'Sort By:' functionality on our ecommerce site, which allows customers to sort our products by price, alphabetically, etc. To give an example: http://www.parklanechampagne.co.uk/park-lane-champagne/special-occasions/easter is showing as being duplicated by this page: http://www.parklanechampagne.co.uk/park-lane-champagne/special-occasions/easter?productlisting_page=1&sortorder=Price Does anyone know how I can resolve this? Any help greatly appreciated. Kind regards, Jon CDFyp.jpg
Moz Pro | | jonmorse860 -
On Page missing keywords
I set up my keywords in SEOmoz properly, but the On-Page result shows me only 2 keywords instead of the 7 that I set for my campaign. I was expecting the application to score the other keywords on Wednesday, but it did not add the missing keywords. Is this a bug?
Moz Pro | | netbuilder0 -
Reducing duplicate content
Callcatalog.com is a complaint directory for phone numbers. People post information on the phone calls they get. Since there are many, many phone numbers, people obviously haven't posted information on ALL of them, thus I have many phone numbers with zero content. SEOmoz is telling me that pages with zero content look like duplicate content of each other. The only difference between two pages that have zero comments is the title and the phone number embedded in the page. For example, http://www.callcatalog.com/phones/view/413-563-3263 is a page that has zero comments. I don't want to remove these zero-comment phone number pages from the directory, since many people find the pages via a phone number search. Here's my question: what can I do to make Google / SEOmoz understand that these zero-comment pages are not duplicate content?
Moz Pro | | seo_ploom0