NextGEN Gallery Crawler Problem
-
I use the NextGEN Gallery plugin on my WordPress sites.
The Moz crawler reports a ton of high-importance issues with this plugin because it creates duplicate pages.
It will have domain.com/page, domain.com/page/gallery, domain.com/page/gallery/1/, domain.com/page/gallery/2/.
This is a pretty popular plugin, so I am hoping there is a relatively easy fix. I imagine I need to set up a rel canonical, but there does not seem to be an easy way to do so.
Thoughts?
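For illustration, the fix being asked for boils down to mapping every paginated gallery URL back onto its parent gallery page. A minimal sketch of that mapping, using the hypothetical domain.com URL patterns from the question (adjust the pattern to your actual permalink structure):

```python
import re

def canonical_for(url: str) -> str:
    """Collapse a NextGEN-style paginated gallery URL onto the gallery page.

    e.g. domain.com/page/gallery/2/ -> domain.com/page/gallery
    URL patterns are taken from the question above; real permalink
    setups may differ, so treat this as a sketch of the logic only.
    """
    # Strip a trailing "/<number>" or "/<number>/" page component, if present.
    stripped = re.sub(r"/\d+/?$", "", url.strip())
    return stripped or url

urls = [
    "https://domain.com/page/gallery",
    "https://domain.com/page/gallery/1/",
    "https://domain.com/page/gallery/2/",
]
for u in urls:
    print(u, "->", canonical_for(u))
```

All three URLs resolve to the same canonical target, which is exactly what the canonical tag needs to declare on each paginated page.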
-
I'll look again. Thanks so much for the help.
-
You may want to double-check that. I'm not a Yoast specialist, per se, but I've never had that plugin fail to solve this type of problem, and from what you've described, I'm sure Yoast can help. It's a little complex; maybe take a second look through it with a developer.
-
Already tried that plugin, and it did not have this functionality.
-
Download the Yoast SEO plugin. It'll help you solve nearly all problems. Hands down the best WP plugin for SEO.
-
I understand that. My problem is implementing it within a plugin that I did not develop. Since it is a vastly popular plugin, I was hoping somebody already knows how to do this. It does not give me access to the "page 1 gallery" or "page 2 gallery" pages for me to put a rel canonical in place.
-
Yes, if /page/gallery/1/ and /page/gallery/2/ are exact duplicates, just place a canonical tag on each of them (and on /page/gallery itself) referencing /page/gallery.
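Concretely, assuming the URL structure from the question, every variant would carry the same canonical tag in its `<head>`, for example:

```html
<!-- In the <head> of /page/gallery, /page/gallery/1/, and /page/gallery/2/ -->
<link rel="canonical" href="https://domain.com/page/gallery" />
```

Since the gallery plugin doesn't expose these paginated pages for editing, this tag would in practice have to be emitted by the theme or an SEO plugin rather than typed in by hand.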
Another recommendation to help search engines avoid duplication and confusion: apply rel="next" and rel="prev" to the series of gallery pages. Once you get enough images, I'm guessing the URL structure will become /page/gallery/page_2, 3, 4, etc. (or something like that).
The next/prev elements let search engines know the pages are a series of related content (in this case, images) and belong together.
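Assuming the same hypothetical URL structure, page 2 of the series would declare its neighbours like this:

```html
<!-- In the <head> of /page/gallery/2/ -->
<link rel="prev" href="https://domain.com/page/gallery/1/" />
<link rel="next" href="https://domain.com/page/gallery/3/" />
```

The first page of the series carries only a rel="next" link, and the last page only a rel="prev" link.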
That type of content can be a real headache for us SEOs! Good luck.