1st Campaign - Advice please.
-
We have just run our first campaign for our site and have found over 5,335 errors!
It would appear that the majority are cases where the crawl has flagged the product page as a duplicate of its "Write a Review / Tell a Friend" page, hence the large number of errors.
In addition we also have over 5,000 302 warnings for the following URL:
URL: http://www.collarandcuff.co.uk/index.php?_a=login&redir=/index.php?_a=viewCat&catId=105
Please bear in mind we are fairly new to this type of data....so go easy on us.
In short, will these errors have a significant bearing on our rankings, and if so, how do we rectify them?
Many thanks.
Tony
-
Hi Tony,
If the "Sign In" form is an element included on the page that you set as rel=canonical, the other instances of the sign-in form should be neutralized (in terms of triggering duplicate content errors).
Usually, something as small as a sign-in form doesn't constitute enough content to trigger the "duplicate" warning. The search engines' algorithms have to account for elements that are useful on every page (for example, navigation bars). They are more concerned with people scraping large amounts of written content from other sites, or recycling large portions of their own site for SEO purposes.
-
Josh,
It appears that the errors may relate to the "Sign In" section of the page, which, for the record, is available on every page of the site; hence the number of errors. Would that have a bearing on the results, and more importantly, would it reduce the link juice?
-
Josh.
Thanks so much.
We use CubeCart v4. Basic and simple, I know, but it works for us.
Trust that helps.
Tony
-
Hi Tony,
Welcome to the world of SEO!
I just spoke to someone who had a similar issue (duplicates due to user reviews). There is a relatively clean solution for this, and it comes with a fancy name: "canonicalization". Here is a great step-by-step guide for setting a page to rel="canonical".
Basically, you want to tell Google that there is one "source" page for all the duplicates.
Example:
You have a page for blue widgets. Users can review the blue widget, but each new review becomes a new page (the problem). If you label the original product page as canonical, your duplicates will be ignored, and the Google bot will be much happier with your site.
It's hard for me to tell how much the duplicate content is impacting your ranking right now, but after you use rel=canonical, you should see some major improvements within a couple weeks.
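To make that concrete, the tag itself is a single line in the head of each duplicate page. A minimal sketch — the URLs and page names here are hypothetical, not your actual CubeCart paths:

```html
<!-- Placed in the <head> of every variant page (the review page,
     the "tell a friend" page, etc.). The href points at the one
     "source" product page you want Google to index. -->
<link rel="canonical" href="http://www.example.com/blue-widgets" />
```

Once the tag is in place, Google consolidates the duplicate signals into the canonical URL instead of flagging each variant separately.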
As for the 302 redirects: you want to fix these immediately! Here is the step-by-step for 301s.
There are some shortcuts for changing 301 redirects depending on your platform. Do you happen to know what your development team is using? Changing 5,000 of these by hand would be a little cumbersome.
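For reference, on an Apache server the swap from a 302 to a 301 usually comes down to the status flag in .htaccess. A minimal sketch, assuming mod_rewrite is available; the paths are illustrative, not CubeCart-specific:

```apache
RewriteEngine on
# A temporary redirect (what you have now) would use [R=302] or a bare [R].
# Marking it [R=301] makes it permanent, so search engines transfer the
# old URL's authority to the target.
RewriteRule ^old-page\.html$ http://www.example.com/new-page [R=301,L]

# For a one-off mapping, mod_alias works too:
# Redirect 301 /old-page.html http://www.example.com/new-page
```

If CubeCart is generating those 5,000 login redirects itself, a single pattern-based rule like the one above can cover them all at once, rather than one redirect per URL.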
Keep up the good work!
Related Questions
-
PLEASE HELP - Old query string URL causing problems
For a long time, we were ranking 1st/2nd for the term "Manual handling training". That was until about 5 days ago, when I realised that Google had started to index not only a query-stringed URL, but also an old version of the URL. What was even weirder was that when you clicked on the result it 301 redirected to the page that it was meant to display...

The wrong URL that Google had started to index was: www.ihasco.co.uk/courses/detail/manual-handling?channel=retail
The correct URL that it should have been indexing is: https://www.ihasco.co.uk/courses/detail/manual-handling-training

I can't get my head around why it has done this, as a 301 was in place already and we use rel canonical tags which point to the main parent pages. Anyway, we slapped a noindex tag in our robots.txt file to stop that page from being indexed, which worked, but now I can't get the correct page to be indexed, even after a Google fetch.

After inspecting the correct URL in the new Search Console, I discovered that Google has ignored the rel canonical on the page (which points to itself) and has selected the wrong, query-stringed URL as the canonical. Why? And how do I rectify this?
Intermediate & Advanced SEO | iHasco
-
[Advice] Dealing with an immense URL structure full of canonicals, with budget & time constraints
Good day to you Mozers, I have a website that sells a certain product online which, once bought, is delivered to a point of sale where the client's car gets serviced. This website has a shop, products and informational pages that are duplicated by the number of physical PoS. The organizational decision was that every PoS was supposed to have its own little site that could be managed and modified. Examples are:

Every PoS could have a different price on their product
Some of them have services available and some may have fewer, but the content on these service pages doesn't change.

I get over a million URLs that are, supposedly, all treated with canonical tags to their respective main page. The reason I use "supposedly" is that verifying the logic they used behind the canonicals is proving to be a headache, but I know and I've seen a lot of these pages using the tag. i.e.:

https://mysite.com/shop/ <-- https://mysite.com/pointofsale-b/shop
https://mysite.com/shop/productA <-- https://mysite.com/pointofsale-b/shop/productA

The problem is that I have over a million URLs that are crawled, when really I may have less than a tenth of them with organic traffic potential. Question is:
For products, I know I should tell them to put the URL as close to the root as possible and dynamically change the price according to the PoS the end-user chooses. Or even redirect all shops to the main one and only use that one. I need a short-term solution to test/show whether it is worth investing in development to correct all these useless duplicate pages. Should I use robots.txt and block off the parts of the site I do not want Google to waste its time on? I am worried about indexation, accessibility and crawl budget being wasted. Thank you in advance,
Intermediate & Advanced SEO | Charles-O
-
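On the robots.txt idea: blocking the per-PoS paths is a quick way to stop the crawl waste while you test. A minimal sketch, assuming the point-of-sale sections all share the URL prefix shown in the example above:

```
# Hypothetical robots.txt: stop crawlers from spending budget on the
# duplicated per-point-of-sale sections while the main shop stays open.
User-agent: *
Disallow: /pointofsale-
```

One caveat: pages blocked in robots.txt can no longer be crawled, so Google stops seeing the canonical tags on them, and already-indexed URLs may linger in the index even though their content is never fetched again.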
301 redirects broken - problems - please help!
Hi, I have a bit of an issue... Around a year ago we launched a new company. This company was launched out of a trading style of another company owned by our parent group (the trading style no longer exists). We used a lot of the content from the old trading style website, carefully mapping page-to-page 301 redirects, using the change of address tool in Webmaster Tools, and generally did a good job of it. The reason I know we did a good job is that although we lost some traffic in the month we rebranded, we didn't lose rankings. We have since gained traffic exponentially and have managed to increase our organic traffic by over 200% over the last year. All well and good.

However, a mistake has recently occurred whereby the old trading style website domain was deleted from the server for a period of around 2-3 weeks. It has since been reinstated. Since then, although we haven't lost rankings for the keywords we track, I can see in Webmaster Tools that a number of our pages have been deindexed (around 100+).

It has been suggested that we put the old homepage back up, and include a link to the XML sitemap to get Google to recrawl the old URLs and reinstate our 301 redirects. I'm OK with this (up to a point; personally I don't think it's an elegant solution), however I always thought you didn't need a link to the XML sitemap from the website and that the crawlers should just find it?

Our current plan is not to put the homepage up exactly as it was (I don't believe this would make good business sense given that the company no longer exists), but to make it live with an explanation that the website has moved to a different domain, with a big old button pointing to the new site. I'm wondering if we also need a button to the XML sitemap or not? I know I can put a sitemap link in the robots file, but I wonder if that would be enough for Google to find it? Any insights would be greatly appreciated. Thank you, Amelia
Intermediate & Advanced SEO | CommT
-
Ranking 1st on Google, but not in top 50 on Bing and Yahoo?
Hi Mozzers, Roughly 2 weeks ago we were ranked:

#2 on Google for "African American Business Owner Mailing Lists"
#2 on Bing
#2 on Yahoo

Now we are ranked:

#1 on Google
#50 on Bing
#50 on Yahoo

I noticed a lot of our other keywords improved on Google during this period but vanished from the other 2 search engines. Other KWs include:

"Apartment Owner Mailing Lists" (#4 on Google)
"Community College Mailing Lists" (#3 on Google)
etc.

What gives? Thoughts?
Intermediate & Advanced SEO | Travis-W
-
Please Review and Advice!
My site is on WordPress. Could an expert please give me some guidelines on how I can improve my website's SEO?
Intermediate & Advanced SEO | shakiel
-
.htaccess question/opinion/advice needed
Hello, I am trying to achieve 3 different things in my .htaccess. I just want to make sure I am doing it the right or best way, because I don't have much experience working with this kind of file. I am trying to:

a) Redirect www.mysite.com/index.html to www.mysite.com so I don't get a duplicate content/tag error.
b) Redirect mysite.com to www.mysite.com
c) Get rid of the file extensions; www.mysite.com/stuff.html to www.mysite.com/stuff

This is the code that I'm currently using and it seems to work fine; however, I would like someone with experience to take a look so I can avoid internal server errors and other kinds of issues. I grabbed each piece of code from different posts and tutorials.

Options +FollowSymlinks
RewriteEngine on

Index rewrite:
RewriteRule ^index.(htm|html|php) http://www.mysite.com/ [R=301,L]
RewriteRule ^(.*)/index.(htm|html|php) http://www.mysite.com/$1/ [R=301,L]

Extension removal:
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^(.*)$ $1.html

Non-www to www:
Rewritecond %{http_host} ^mysite.com [nc]
Rewriterule ^(.*)$ http://www.mysite.com/$1 [r=301,nc]

Thanks a lot!
Intermediate & Advanced SEO | Eblan
-
Can someone please help me understand my sites recent loss of rankings?
My site has been top 3 for 'speed dating' on Google.co.uk since about 2003, and it went to below top 50 for a lot of its main keywords shortly after 27 Oct 2012. I did a reconsideration request and was told there was 'no manual spam action'. I have a Page Authority of 53, a regular blog http://bit.ly/oKyi88, a Klout score of 40, user reviews and quality content.

I did discover that another URL I was using was set to a 302 instead of a 301 for some reason. I don't necessarily think this was an issue, as Google should know which is the trusted URL and therefore which content to list. I removed this redirect completely about 3 weeks ago, but I've seen no improvement. I'm looking at improving various things, but I'm still not sure why I've been hit, and wonder if I'm missing something obvious? Any suggestions greatly appreciated.
Intermediate & Advanced SEO | benners
-
Website gone from SERPs - please help to understand why
Dear SEOmozers, My website www.buy-hosting.net has been around for about 6 months. It performed quite well in the first months, and for the main keyword "buy hosting" it improved continuously, reaching #7 in the Google SERPs. Then, some days ago, it suddenly disappeared and traffic went down to nearly zero. It is not even in the top 100 for "buy hosting" anymore.

Can anyone please advise me what the reason for that could be, and what I could do about it? I'm desperate, because I worked on that project for about 8 months, nearly 100% of my time... Thank you and kind regards
Intermediate & Advanced SEO | ie4mac