If it's relevant, done with usability in mind, and is not deceptive, then it should be fine.
Here's a related Article from Search Engine Roundtable with Matt Cutts video:
http://www.seroundtable.com/google-hiding-content-17136.html
Their coder will be taking a look at it when he's freed up the time to see if there's a way to do that. Hopefully it's something easy like that and doesn't require numerous workarounds to get going.
I'm working with someone to make fixes to an X-Cart site, but I'm at a loss on a few of them. Some directory URLs had been changed around on their ecommerce site to make them more descriptive and more human friendly. The problem is that, according to the team's coder, simple redirects won't work for the directories, and mod_rewrite and RedirectMatch didn't work for some unknown reason.
I don't really know anything about X-Cart. I've made some basic changes and redirects before through their admin panel, but I don't have any clue as to how to 301 directories properly. Any insights? Thanks!
Just because I'm not sure if I'm reading this correctly, or because it's Friday & my brain is misfiring... Did you place a canonical on www.domain.com/Product123 pointing at the lowercase AND then 301 redirect it to the lowercase? Because if that's the case, then it would really only pick up the 301.
They can be used together in this fashion without any problems. The 301 is redirecting duplicate content that does not need to physically exist and is better served by another page. The Canonical "redirects" the bots from a page that needs to exist for a specific purpose (tracking tag, model id, product id, etc.) but which is a duplicate or subset of another page that should be given the proper ranking signals in place of the page with the variable.
Edit: As to the second question, don't worry. They will naturally change over to the correct page(s) over time as long as Google chooses to follow the canonical tag and consider the page it is pointing to as proper/relevant. In the meantime, the 301s will bring people to the proper place and the canonicals should be passing signals/equity to the proper pages.
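To make that concrete, here's a rough sketch with made-up URLs (the exact syntax will depend on your server setup):

# .htaccess: 301 a duplicate page that doesn't need to exist to the page that should rank
Redirect 301 /Product123 http://www.domain.com/product123

<!-- In the <head> of a page that has to stay live (e.g. a tracking-tag or model-id variant) but duplicates the main page: -->
<link rel="canonical" href="http://www.domain.com/product123" />

The key is that the 301 and the canonical live on different pages for different purposes, rather than both being placed on the same URL.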
Here's some info from Matt Cutts on a similar situation someone had back in December that may help allay some of your worries about a penalty persisting from a long-unregistered domain.
http://www.seroundtable.com/google-old-penalties-expired-domain-17883.html
https://productforums.google.com/forum/#!topic/webmasters/H9-kbSf8r4w/discussion
I'm going to refer you back to the other two questions you asked today about the same thing with the same graphs that already have a bunch of answers in them.
http://moz.com/community/q/what-penalty-would-cause-this-traffic-drop-google-analytic-screenshot
http://moz.com/community/q/does-this-graph-look-like-a-panda-2-0-hit
My first guess would be some sort of reporting error, as I'm not sure what would cause such a quick and drastic shift. Second, I would check whether the entrance/landing pages in Analytics are the same or not. Check the related pages to see if anything may have been changed on them to create this. Two of the sites I work on have had some changes over the past few months with respect to organic vs. direct traffic, where our organic has slipped a bit more and more but is turning into direct traffic, though definitely not on the scale of what has happened to you.
I had done about half of that... I'll take a look at all of it and try again tomorrow following your suggestions and see if I can figure it out then. Thanks.
I must be missing something or skipping a step or lacking proper levels of caffeine.
Under my High Priority warnings I have a handful of 404s which are like that on purpose but I'm not sure how Moz is finding them. When I check the referrer info, the 404 is being linked to from a different 404 which is now a 301 (due to craziness of our system and what was easiest for the coders to fix a different problem ages ago). Basically, if a user decides to type in a non-existent model number into the URL there is a specific 404 that comes up. While the 404 error is "site.com/product/?model=abc123" the referrer is "site.com/product?model=abc123" (or more simply, one slash is missing). I can't see how Moz is finding the referrer so I can't figure out how to make Moz stop crawling it. I actually have the same problem in Google WMT for the same group of 404s.
What am I just not seeing that will fix this?
Being able to separate things would be nice. When we scaled back things with our consultant, we switched our Moz accounts so my company pays for my personal account to have a Pro membership. But there's always the chance that someday I will move on to other things and then need to either hand over my personal account or my company will have to start a new account and lose the easy access to historical data which is under my name.
That might explain the listing of custom_crawl_issues.crawl_error.name in crawl diagnostics under High Priority... I was wondering where that was coming from.
It's a question of relevancy and user experience. If I do a search for "blue widgets" and see your blue widget link in the SERP but get taken to orange doodads instead... well, I'll be disappointed and bounce. That page will eventually stop ranking for "blue widgets". So when doing a 301 you should make it as relevant as possible. If your blue widget link redirects to red widgets... well, that's closer. I might still bounce, but there's a chance I'll stay to look at the widget. If the blue widget page redirected to "Blue Widget 2.0" then that's about as relevant a 301 as you can have. It will likely continue ranking (though the old link in the SERPs will likely swap out for the new one eventually).
Instead of doing redirects, there's always the option to keep the page up with a discontinued message and offer links to similar products on the page. If you don't want people bouncing because they were redirected to something they weren't expecting but really want to enhance the link equity and rankings of a specific page, you could keep "blue widgets" up with a discontinued message to "blue widget 2.0" and add a rel=canonical tag from blue widget to blue widget 2.0 to pass equity. Eventually the new page will swap for the old one in rankings, it will likely lower bounces caused by being shunted to a page you didn't expect, it gives people time to switch any direct links to the new page, and then after a few months you 301 the old page to the new page.
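If you go the discontinued-message route, the tag itself is just one line in the <head> of the old page (URLs here are made up for illustration):

<!-- on the discontinued blue-widgets page, pointing signals at its replacement -->
<link rel="canonical" href="http://www.example.com/blue-widget-2-0" />

Then, once links and traffic have shifted over, the old page can be 301'd to the new one as described above.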
When you're on the crawl diagnostics page, there is a column showing the number of duplicate URLs. If you hover over the little "i" symbol, the popup will show the other URLs. Also, if you download the CSV of the report, there is a column called Duplicate Page Content which lists the URLs.
Edit: Unless I completely misinterpreted what you were asking and you want to know how to find the exact words or meta that are the same across them all. In which case, Moz usually only picks up duplication of Title Tags, Meta Descriptions and the actual page content. So if they're flagged as duplicates, it's just a matter of reading the pages or giving them a quick glance to see how similar they are.
I see what you're saying, but in my head it feels like using the same one everywhere would set off spam flags and/or wind up keyword stuffed more readily. For example, we have an image of an accent wall used in a commercial retail space made with artificial white brick paneling, which could wind up in galleries for brick, accent walls, and retail design, as well as potentially landing pages for retail design, accent walls, brick panels, and white brick, and in the gallery for the specific product line of that white brick... so I'd be worried that making one 'perfect' alt tag covering all those bases, like "Old Navy commercial display with faux white brick panel accent wall", and using it in so many different galleries would be spammier and feel more stuffed.
For a while now we've had an outside SEO consultant as well as having me in-house doing a variety of work. One of the things our consultant would do was write up optimized alt & title tags for the image galleries on our ecommerce sites.
Recently what came up was what to do when an image appears in multiple galleries (e.g. an image of a bedroom could appear in both the bedrooms gallery & the accent wall gallery). We're not sure whether it would be best practice to use the exact same alt & title tags for an image in all the galleries it appears in, or whether that would be too much duplication and each gallery should have different tags despite it being the same image.
None of it is being done in a deceptive manner; we've just been tailoring how we explain the image based on the specific gallery, landing page, etc. Our consultant is saying that an image should always have the same tags across the entirety of the site, but I'm more of the mind that varying them by specific page would help images more readily rank for multiple relevant terms.
Any advice?
Google has been planning to scale back on the frequency of rich snippets. Matt Cutts mentioned this to an extent at PubCon in October (link). Google has also recently begun going after those that have spammed and/or manipulated rich snippets in a deceptive manner (link). Your stars disappearing may just be part of the changes in the works.
As to the homepage question, make sure you have your preferred domain set in Webmaster Tools (i.e. WWW vs. non-WWW). Pick whether your homepage should have a trailing slash or not. Have all basic iterations of it redirect to your chosen variation (e.g. homepage.com, homepage.com/index.php, and www.homepage.com/index.php all redirect to www.homepage.com). If those parameters shown in your image don't significantly change the look or information on the page but are necessary, then consider adding canonicals from all parametered versions of the homepage to the regular version of the page.
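For the parametered versions, that would just mean something like this in the <head> of each variation (the domain and parameter here are placeholders):

<!-- on www.homepage.com/?someparam=value, or any other parametered homepage URL -->
<link rel="canonical" href="http://www.homepage.com/" />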
Good point. Wasn't thinking about that when I wrote my answer.
Google penalizes duplicate content which is deceptive, spammy, thin, etc. If it's necessary duplicate content (like T&C legalese), then at worst they probably won't pass any equity to it. Here's an article on a Matt Cutts video about duplicate content from July of last year: http://searchenginewatch.com/article/2284635/Does-Duplicate-Content-From-Terms-Conditions-Affect-Google-Rankings
I've had this issue twice in the past. Once was because of a conflict between All In One SEO and another plugin, which was fixed by updating the plugins. The second time was after a theme change where it turned out the theme had an internal suite of SEO tools that were adding the secondary description, which was fixed by shutting off those features. Without knowing all the details of your WordPress site, it's a bit hard to determine the exact cause. Check that your theme doesn't have a function interfering with All In One and try turning off plugins to see if any of them are in conflict. It could be as simple as needing an update.
The canonical should pass link equity similar to a 301 redirect.
I agree with Everett from a standpoint of User Experience. It could potentially be better for users if they appeared on a product page where they could then choose color, size, etc. variables for their product instead of having to click through multiple pages to find the right one or scroll through a huge list of variations.
The reduction in pages should also help consolidate link equity and keep pages from cannibalizing each other in the SERPs.
As for Takeshi's suggestion on canonicals, I'm a fan of the rel=canonical tag, but the potential problem with using them in this instance is twofold: 1) as Takeshi mentioned, "as far as Google is concerned you only have 1 page with the content on it", and 2) canonicals are suggestions, not directives, so the search engines may choose not to recognize them if not used properly.
How similar/different are these products? Are they marketed to slightly different audiences? Do they have slightly different uses? Or are they essentially the same product but one is blue and the other is black? If they can be marketed to different groups or have a big enough difference that they should be separate pages, then I'd consider doing some research and copywriting to add unique, relevant content to each to set them apart. If they're really too similar, then determine whether they definitely need to stay on separate pages or whether they could be merged. If they can be merged, choose which one stays live, update the page as needed and 301 the old page to the one still live. If they definitely need to stay on separate pages despite being so similar, consider canonicalizing one to the other.
Similar to what BJS1976 and Takeshi stated, the way we dealt with the bulk of duplicate content issues from a similar circumstance on our ecommerce site was to handle the different varieties of the same product through parameters and then canonicalize the parametered URLs to the version of the URL sans parameter.
For example, due to database reasons /product1.php?color=42 and /product1.php?color=30 are the same product, just one in red and one in blue. The pages are exactly the same and have radio buttons/dropdowns to choose any available color. /product1.php would default to one specific variation we chose (usually the best-selling color), and then /product1.php?color=42 and /product1.php?color=30 had a rel=canonical tag added pointing at /product1.php.
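In practice that was just one line in the <head> of each parametered version (the domain shown here is a placeholder):

<!-- on /product1.php?color=42 and /product1.php?color=30 -->
<link rel="canonical" href="http://www.example.com/product1.php" />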
For any remaining products flagged as duplicates that couldn't be fixed that way, we set those aside to have myself and another copywriter work on creating further content that would set them apart enough as to not be duplicates.
And here's a Matt Cutts video on ccTLDs from July of last year: http://youtu.be/yJqZIH_0Ars
He also references how they handle those "cool" ones like .ly, .io, .it and so on.
Typically a ccTLD is suited for that specific country/region. So having a .co.uk will make you more relevant to searchers in the UK, but not for searches from, say, the US looking for something in the United Kingdom (unless they happen to be searching through Google.co.uk). This isn't 100% always the case, but it generally holds. If you're attempting to globally reach people searching for that term, you'd probably be better off with a generic TLD.
Looking at the two options you gave, I'd say it depends on how you handle your site navigation (assuming you're able to make the URLs however you want and aren't restricted by your CMS). For example, if from your homepage you have a category/hub page for Services, and from there you can go to Service1, Service2, Service3, etc., then I would think company.com/services/service1.html is the most logical way of structuring your URLs. If there is no Services category page (or your homepage is essentially the Services page), then company.com/service1.html isn't a bad way to go. Personally I'm a fan of creating category/hub pages and more site content where relevant, so I'd go the first way: create some great content for a hub page and then make sure the services funnel down nicely from there for the more targeted/longtail searches.
I completely understand wanting to target both. Since your title tag is too long, it seems to me that it would be better to choose one or the other to shorten it. Keep both in the page copy and meta description where natural though... don't go changing that. But your title needs to be closer to 64 characters long. Plus Google takes into account pixel width to determine how long a title can be before it gets truncated.
As to the redirect. Link equity does get passed with a 301. As long as the links are relevant, authoritative, and/or not spammy then that's a plus.
Your title tag is too long, and repeating "portable dvd player" twice in the title looks unnatural to me personally. Yes, I know one is "player" and the other is "players", but Google already understands plurals, so it's not always necessary to hit both the singular and plural of something in copy.
Checking Google Page Speed Insights, you're scoring a 57/100 on the site. I'd look into some of their suggestions to improve user experience.
I'm not finding any external links to that page using Open Site Explorer. You might want to bolster the page with some good inlinks and social shares to give it a boost.
What HTTP status response codes are they returning? If they're a 200 but providing a poor user experience then eventually they'll be deindexed or at least ranking so poorly that no one will run across them, if they 404 or 410 then they'll be deindexed eventually, and if they 500 then you likely need to fix some other things on your site. Also, are you losing any link equity by not 301ing them?
Are they listed in Google Webmaster tools? You should be able to find out where they are linking from that way. Also, if they're 404ing and they weren't ever a real page and they aren't really being linked from anywhere then there's no real reason to stop it from 404ing. You should probably make a custom 404 to serve people that directs them back to your site instead of a completely blank page.
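If the site is on Apache, the custom 404 can be as simple as one line in .htaccess (the path to the 404 page is hypothetical); the page will still return a 404 status, it just won't be blank:

# serve a branded 404 page instead of the server default
ErrorDocument 404 /custom-404.php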
The keyword tracking tool is for organic rankings.
Nope, that wouldn't cause a blanket penalty on the term. Just didn't have the time that ImWaqas had to go over things in-depth but wanted to at least point that out since I had the chance. Take a look at all the things ImWaqas stated to help fix your major site issues.
One oddity I did notice: you have <meta property="og:locale" content="en_US" /> listed in the page source, which I believe should be en_GB for a UK-specific website.
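In other words, for the UK site it should presumably read:

<meta property="og:locale" content="en_GB" />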
I agree with Federico and Keri. Don't shoot yourself in the foot by removing the old info just to make it a 2014 post when you can keep the old post (which is still very relevant for its year) and have a new post that will be relevant for its year as well. Plus the idea from Keri is a good way to organize them together, increase user time on site, and could help to lower bounce rates.
It is always best to do a one-to-one redirect instead of a chain. As Federico said, there is some PageRank loss when doing a redirect (though the exact amount is debatable and may be negligible), and redirecting A to B to C compounds the problem. On top of that, too many redirects in a chain will lead Googlebot to stop crawling the chain. One or two is fine, three or more is not. In this older video http://youtu.be/r1lVPrYoBkA Matt Cutts starts talking about redirect chains at around 2:48 and mentions that one, two and maybe three in a chain is fine. This Whiteboard Interview from 2010 with Matt Cutts http://moz.com/blog/whiteboard-interview-googles-matt-cutts-on-redirects-trust-more also mentions keeping it to 1 or 2 301s in a chain.

So if you're redirecting A -> B -> C -> D -> E -> F... you're possibly hurting yourself. Where possible you should change the redirects so it's A to F, B to F, C to F, D to F and E to F.

As for removing the redirects after a certain number of months, I'd check to see how many people are still linking in with the older URL. You'd want to ask sites linking in to update to the newest URL before you 404 it and lose those links. And if you're still getting tons of direct traffic coming in on an old 301, then you might want to do some digging & research before you cut off that traffic. Odds are that after a few months you wouldn't be getting much traffic coming through on the older URL, but there is always the possibility.
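As a quick sketch (made-up URLs), flattening the chain in .htaccess just means pointing every old URL straight at the final destination rather than at the next hop:

# instead of chaining /page-a -> /page-b -> ... -> /page-f
Redirect 301 /page-a http://www.example.com/page-f
Redirect 301 /page-b http://www.example.com/page-f
Redirect 301 /page-c http://www.example.com/page-f
Redirect 301 /page-d http://www.example.com/page-f
Redirect 301 /page-e http://www.example.com/page-f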
First, remember that canonicals are a suggestion not a directive. Which means if Google doesn't feel the need to honor the canonical then it doesn't. Also, I would check how you have parameter handling set up in WMT since that could be affecting how googlebot is crawling the page and how it views ?id= and ?option= on your site. It may also be, as Nakul pointed out, not really anything... sometimes WMT will show an error that isn't an error until it gets another good crawl of those pages and fixes itself.
Moz's crawler is rogerbot so that shouldn't block it as far as I know. What does the rest of your robots.txt look like? Maybe something in the format of it is accidentally disallowing more bots than you meant to block.
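For example (this is a hypothetical file, not what yours necessarily looks like), something like this shuts out every crawler, rogerbot included:

User-agent: *
Disallow: /

whereas a rule aimed at one specific bot leaves everything else alone:

User-agent: BadBot
Disallow: /

User-agent: *
Disallow: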
Yep... www.example.com, example.com, example.com/index.php and www.example.com/index.php can all be seen as separate pages if not handled correctly. Essentially you need to determine whether the WWW or non-WWW version of your pages is preferred, and what your preferred version of the homepage is (slash, no slash, /index, etc.). Then go into Webmaster Tools and set either WWW or non-WWW as your preferred version, and set up 301s on your site to redirect anyone from the non-preferred to the preferred versions.
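If you're on Apache, the non-WWW to WWW piece is typically a few lines in .htaccess (example.com is a placeholder, swap in your own domain):

RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]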
Tag archives shouldn't be canonicalized because the hope is that you would be adding more content to both over time that would negate the relevancy of the canonical. Most people seem to NoIndex their tag archives... which is a fine solution considering Tag Archives have negligible benefits in Search. You could keep them indexed if you would prefer and the duplicate content issue will be solved once you add more content to each. Also, keep in mind that Matt Cutts, in a recent video, said that duplicate content will not penalize you if it is non-spammy... it just won't rank well (or at all) while duplicate.
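The NoIndex option is just a meta tag in the <head> of each tag archive page (and if you're on WordPress, most SEO plugins have a setting that will add it for you):

<meta name="robots" content="noindex, follow" />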
What sort of cards are we talking about? I immediately think "greeting cards" when you say that, but I don't want to just assume that's the case. If it is, then from a personal user experience standpoint I would say I'd be more likely to search a specific range, and gain more from finding a category page for a range of cards in the SERPs, than I would from a page with an individual card on it. I.e., I'm more likely to search "birthday cards" or "get well cards" or "thank you cards" than to search "that birthday card with a grumpy cat that has balloons and a smushed cake". In which case I'd say go with robust category pages and, if possible, consider canonicalizing the individual cards to the category if you're going to also use the content of the parent category on the individual pages. If it's not greeting cards... well then I wrote all of this for nothing. (Unless it's playing cards... what I wrote might work for that as well.)
Here's a two-fold answer: 1) As per Matt Cutts, duplicate content is not an issue unless it is spammy or keyword stuffed. Non-spammy duplicate content will mainly just be ignored (i.e. will not rank for anything). Too much thin content could get you hit during a panda refresh. http://searchengineland.com/googles-matt-cutts-duplicate-content-wont-hurt-you-unless-it-is-spammy-167459
Because Google has never broken its own guidelines and then penalized its own pages before.... http://searchengineland.com/google-penalizes-google-japan-16541
http://searchengineland.com/google-chrome-page-will-have-pagerank-reduced-due-to-sponsored-posts-106551
As a sort of "continuing education" grant for the New York State Board of Education, my mom (who has a PhD in Educational Technology) was tasked with teaching teachers in New York City high schools how to teach using current technology through a variety of web based courses. One of the courses she would take them through was a basic understanding of how the search engines work, why (to an extent) some pages rank for the terms they show up for, and how that can be leveraged in a modern teaching environment. She even used a number of articles from Moz, SEJ, SE Round Table, etc. It was mainly to make teachers better searchers though so they could more quickly find and access relevant and up-to-date information. So it was almost an Intro to SEO class.
Don't know of any universities that do true SEO courses or degrees... or even how they would, considering how quickly the landscape changes in this industry.
Yup, management wanted us to look into running a "contest" as well recently which got put on hold when we brought up this from FB's terms & conditions: Personal Timelines must not be used to administer promotions (ex: “share on your Timeline to enter” or “share on your friend's Timeline to get additional entries” is not permitted).
Facebook Pages can't always see every person who shared a post of theirs because of the privacy settings of the specific user.
There are no issues with connecting multiple sites to one email account in analytics and WMT. Would get pretty hard to handle if everyone needed a brand new email address for every single site they tracked.