If the "Subscribe" box is for RSS notifications, why not just have an RSS icon leading them to the FeedBurner page where they can subscribe? Personally I think that would be better than having a generic subscribe box just above the flashy newsletter signup. I figure a lot of people won't understand the difference when first looking at your page and will either mistakenly subscribe when they meant to get the newsletter or overlook the subscribe box completely.
Best posts made by MikeRoberts
-
RE: Website subscribe form.
-
RE: Short Url vs Medium Urls ?
Looking at the two options you gave, I'd say it depends on how you handle your site navigation (assuming you're able to make the URLs however you want and aren't restricted by your CMS). For example, if from your homepage you have a category/hub page for Services, and from there you can go to Service1, Service2, Service3, etc., then company.com/services/service1.html is the most logical way of structuring your URLs. If there is no Services category page (or your homepage is essentially the Services page), then company.com/service1.html isn't a bad way to go. Personally I'm a fan of creating category/hub pages and more site content where relevant, so I'd go the first way: create some great content for a hub page and then make sure the services funnel down nicely from there for the more targeted/longtail searches.
-
RE: Are Tags in Blogs good?
The only potential value in blog tags is for the sake of usability. The SEO values are negligible at best and most wind up noindexing Tag archives anyway in order to save themselves from duplicate content issues. The value of relevant tags on a blog is that they help tie together other relevant articles on a subject to make them more easily accessible and tie all the posts together with something akin to an overarching theme. Use them for the sake of giving human users another way to find related content that may keep them on site longer.
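If you do go the noindex route for tag archives, the usual approach is a robots meta tag in the head of the archive template (a sketch; most WordPress SEO plugins expose this as a setting so you don't have to edit templates by hand):

```html
<!-- In the <head> of the tag archive template: keep the archive out of the
     index but still let crawlers follow the links to the tagged posts -->
<meta name="robots" content="noindex, follow">
```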
-
RE: Increase in 404's
A one-to-one redirect is usually best, but as long as the page you redirect to is the most relevant alternative, you're fine. If the original page that now 404s was receiving essentially no traffic and wasn't ranking, you can likely let the 404 stick and it will drop from the index (assuming it was there already). If one new page is relevant for 2, 3, 4+ older pages and there's nowhere more relevant to redirect them to, it's perfectly fine to 301 all of those to the same place. What you don't want to do is blindly bulk-redirect everything one level up, or to your homepage, without doing your research first. You want to make sure the new URL you're pointing to will serve your customers/visitors as well as it can.
And as Simon said in his response, Cyrus' post on redirects is a great resource for answers on what you should, or shouldn't, be doing.
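On an Apache server, those one-to-one 301s are usually just a line each in .htaccess (the paths below are made up for illustration):

```apache
# One-to-one 301s: each retired URL points at its most relevant live counterpart
Redirect 301 /old-blue-widgets.html /widgets/blue-widgets.html
Redirect 301 /old-widget-faq.html /support/widget-faq.html
```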
-
RE: Will I lose traffic from Google for re-directing a page?
It's a question of relevancy and user experience. If I do a search for "blue widgets" and see your blue widget link in the SERP but get taken to orange doodads instead... well, I'll be disappointed and bounce. That page will eventually stop ranking for "blue widgets". So when doing a 301 you should make it as relevant as possible. If your blue widget link redirects to red widgets... well, that's closer. I might still bounce, but there's a chance I'll stay to look at the widget. If the blue widget page redirects to "Blue Widget 2.0", that's about as relevant a 301 as you can have. It will likely continue ranking (though the old link in the SERPs will likely swap out for the new one eventually).
Instead of doing redirects, there's always the option to keep the page up with a discontinued message and offer links to similar products on the page. If you don't want people bouncing because they were redirected to something they weren't expecting, but you really want to pass the link equity and rankings to a specific page, you could keep "blue widgets" up with a discontinued message pointing to "Blue Widget 2.0" and add a rel=canonical tag from blue widgets to Blue Widget 2.0 to pass equity. Eventually the new page will swap in for the old one in the rankings, it will likely lower bounces caused by people being shunted to a page they didn't expect, and it gives people time to switch any direct links over to the new page. Then, after a few months, you 301 the old page to the new one.
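The keep-the-page-up approach only needs one tag in the head of the old product page (URLs here are hypothetical stand-ins):

```html
<!-- In the <head> of the discontinued blue-widgets page, pointing at its successor -->
<link rel="canonical" href="https://www.example.com/blue-widget-2-0/">
```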
-
RE: How to Stop Google from Indexing Old Pages
Have you submitted a new sitemap to Webmaster Tools? Also, you could consider 301 redirecting the pages to relevant new pages to capitalize on any link equity or ranking power they may have had before. Otherwise, Google should eventually stop crawling them because they return 404s. I've had a touch of success getting them to stop crawling quicker (or at least it seems quicker) by changing some 404s to 410s.
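On Apache, flipping specific 404s to 410s can be done per-URL in .htaccess (paths below are hypothetical):

```apache
# Tell crawlers these pages are gone for good (410), not just missing (404)
Redirect gone /old-page-1.html
Redirect gone /old-category/old-page-2.html
```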
-
RE: Anyone else having webmaster tools delays?
The WMT error reporting issue has been driving me crazy. At first it was stuck on the 23rd but it did move forward to the 25th for me eventually... but that doesn't exactly help me at all. Hopefully they'll figure it out and fix it soon.
-
RE: Canonical tag for Similar Product Descriptions on Woocommerce
I agree with Laura on this one. If the content of each page is 99% the same as each other (and/or 99% the same as what all your competitors are doing) then you're not going to rank and be found for these products; especially if there is an older, more established brand in your industry. Your best option is really to fill out those pages with more unique content. It can be daunting but you can get them to rank and be found with just a little bit of work. (Trust me on this, I used to work for an ecommerce that had a few hundred products [each with 7-12 micro-variations] that were legitimately the same thing as each other but with a slight color or texture difference at best... you'd be amazed how many ways there are to sell the same thing without duplicating copy.)
Throw together a landscape report, get an idea of all the various core terms in your industry, lay out a plan for what pages will use what term(s) and how, and if you don't have an in-house content writer it wouldn't hurt to look into hiring one (even part time) to get 89+ pages banged out for your site.
-
RE: How to Stop Google from Indexing Old Pages
After reading the further responses here I'm wondering something...
You switched to a new site, can't 301 the old pages, and have no control over the old domain... So why are you worried about pages 404ing on an unused site you don't control anymore?
Maybe I'm missing something here or not reading it right. Who does control the old domain then? Is the old domain just completely gone? Because if so, why would it matter that Google is crawling non-existent pages on a dead site and returning 404s and 500s? Why would that necessarily affect the new site?
Or is it the same site but you switched to Java from PHP? If so, wouldn't your CMS have a way of redirecting the old pages that are technically still part of your site to the newer relevant pages on the site?
I feel like I'm missing pertinent info that might make this easier to digest and offer up help.
-
RE: Traffic Discrepancy between google and moz
I think Roger got a little drunk. I just noticed a similar issue in one of my campaigns. Everything is perfectly fine in GA but Moz shows a 100% drop in Organic Search Visits from 8,000+ visits one week to 0 at the last crawl. All the other metrics look fine though.
-
RE: Not Ranking - Any Tips?
It's only been a month and it's a moderately competitive landscape, so it's possible it will just take some time for all the changes to fully filter through and start ranking you better. Have you done a crawl request on the site? Are your pages all indexed? Is everything that needs to be redirected redirected properly? Is your robots.txt set up properly? And have you seen any growth in important metrics in analytics since the changes were made that might signal the changes are starting to work?
-
RE: Will really old links have any benefit being 301'd
Personally, I wouldn't worry about redirecting a handful of pages that have been 404ing for 6-7 years. Odds are they don't rank for anything, Google has removed them from the index and they'll have little to no traffic going to them. Though I don't think they'll hurt you if you were to redirect them to a relevant live page... I just don't think you'll gain much from doing so.
-
RE: Google Search Console - > Google Search Analytic gives figure for google organic or adwords or combine of both?
Google Search Console only provides you data for Organic traffic. If you have an AdWords account and are running campaigns, you can see how those campaigns are translating into Paid Traffic numbers. In AdWords, you can connect your Search Console account for a specific site to the AdWords profile for that site to get a better comparison of how your paid & organic traffic interact (https://support.google.com/adwords/answer/3097241?hl=en). And if everything is set up correctly in Google Analytics, you can see in the Source/Medium or Channels sections how your traffic numbers are broken down by Organic, Referral, Paid, Direct, etc. which can help give you an even better understanding of how various types of traffic interact with your site.
-
RE: 301 redirects
Having too many 301 redirects in a chain can have a negative impact. I.e., don't 301 a page to another page that 301s to another page that 301s to another page, etc. Google once stated they would follow about 5 hops in a 301 chain before giving up. But honestly, why would you choose to redirect to a redirecting page when you could point it straight at the most relevant final destination? As for having a bulk of 301s, though, I wouldn't worry. If you had 300 different pages that were all being redirected to 300 other pages, Google would not devalue you for it. If your redirects are relevant and good for the user experience, then you're fine.
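Collapsing a chain just means pointing every old URL straight at the final page instead of at each other (illustrative paths):

```apache
# Bad: /page-a -> /page-b -> /page-c (two hops)
# Better: both old URLs 301 straight to the final destination
Redirect 301 /page-a.html /page-c.html
Redirect 301 /page-b.html /page-c.html
```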
-
RE: Is there a tool to find a strange link
That odd URL is the link in the image on the right side of your site for "lose weight today with gastric band hypnotherapy".
I used Open Site Explorer to find what pages were linking in and they all mention "(img alt) gastric" which made it easier to pinpoint where on the pages the link was coming from.
-
RE: Duplicate Content- Archives
Simplest way is usually to set the pages to NoIndex.
-
RE: 4000 new duplicate products on our ecommerce site, potential impact?
Is the 4000-product site still going to exist, or is it being stripped and moved to the 9500-product site? If everything is getting completely moved from one site to the other, then you really do need to find out who has access to canonicals or 301 redirects so you can move the sites properly. If the smaller site is staying up and still selling those products, realize you'd potentially be cannibalizing your own traffic and could wind up with shoddy rankings from all the scraped/dupe content.
Since you have no access to canonicals/NoIndex/robots/etc., the question is: what do you have access to? Do you need to move all these products over? Are they exact duplicates of things you have on your site already? If something is an exact duplicate of something you offer, then you probably shouldn't add a duplicate page, but you would canonical or 301 it if you were able to. If they're close but have slight differences, then you might be better served by adding a new product option to the existing page for the similar product in order to better serve the consumer, instead of diluting rankings with something so similar. Though you still might need that canonical or redirect to ensure everything is targeted properly.
-
RE: Duplicated Terms and Conditions?
Google penalizes duplicate content that is deceptive, spammy, thin, etc. If it's necessary duplicate content (like T&C legalese), then at worst they probably won't pass any equity to it. Here's an article on a Matt Cutts video about duplicate content from July of last year: http://searchenginewatch.com/article/2284635/Does-Duplicate-Content-From-Terms-Conditions-Affect-Google-Rankings
-
RE: Canonical tags for duplicate listings
I agree with Andy.
In this case, there is no real reason to NoIndex/NoFollow these pages. Using rel=canonical makes sense... they provide a service and need to exist since they are individual job listings but leaving them as duplicates will hurt the site in the long run. So using a canonical to point at a Category page one level up in the site's navigation is perfectly acceptable and, from what I've seen, one of the more common uses of the canonical tag.
It's important to remember that a canonical is only a suggestion. It is possible for the spiders to decide not to respect the canonical tag if it appears to be used for manipulative purposes or if it appears that the pages are not relevant to each other. I don't believe that should be an issue in this case, but it's something to keep your eye on for a little while after implementing the tags.
-
RE: Duplicate content across a number of websites.
The problem, as stated by Logan and Don, is that if each of the 25 different locations are too similar then none of those are going to do well in the SERPs. You need to determine how much of each site is going to be too similar and/or duplicate content and consolidate that. One way to do that, as stated by Don, is a single site with local options.
Some achieve this by using geolocation or entering in postal codes & either choosing their local store or having site parameters alter product availability. The content is then restricted by the offerings at the visitor's local store instead of showing all available options from the overarching corporation. So the product pages still exist and are crawlable but some color options may be grayed out where they aren't available or "Out of Stock" warnings will appear where applicable.
One other option I've seen is using different subdomains to offer the same basic idea as geolocation/postal codes, but which could also help with local organic search, e.g. NewYork.Webstore.xyz vs. London.Webstore.xyz. This would allow each location to essentially have its own mini-site on the company's main site (like a halfway point between one big single site and 25 duplicate-content sites). With the single site altered by location data, you only need one version of a product page, but you would need to write up some great localized landing pages for each individual store. For the subdomain idea, you'll want to canonicalize all the duplicates to a main version... so the pages NewYork.Webstore.xyz/ProductA/ and London.Webstore.xyz/ProductA/ would have rel="canonical" pointing at your main site's page Webstore.xyz/ProductA/ so authority is passed to the root domain and you don't get penalized for duplicate content.
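In that subdomain setup, each localized copy of a product page would carry the same tag in its head (using the example URLs above):

```html
<!-- In the <head> of NewYork.Webstore.xyz/ProductA/ and London.Webstore.xyz/ProductA/ -->
<link rel="canonical" href="http://Webstore.xyz/ProductA/">
```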
-
RE: Deleting 30,000 pages all at once - good idea or bad idea?
I suppose the most important question is What will be replacing them?
You don't necessarily want 30k 404 pages appearing overnight like that and causing issues for visitors. Personally I'd say you should go through all the pages to determine what the most relevant alternative page is and then 301 redirect the old pages to the new ones. That'll be a lot of work for 30,000 pages but it would probably be the best way to save from inadvertently losing traffic, backlinks and positive link equity.
-
RE: Google displaying SERP link in Japanese
I tried doing searches from a simulated location in Red Bank and I'm not seeing Japanese characters anywhere.
-
RE: Handling redirects when 2 companies merge
Based on my limited experience with this type of situation, I've felt a good start is to canonical each page on the old domain to its counterpart on the new domain and place a call to action on the old site letting people know of the move and/or branding change. Submit a crawl request and let the bots begin filtering through the changes without affecting the user experience yet. Then after some time, once your regulars (and the bots) have become acclimated to the upcoming changes, you can 301 those pages from the old site to the relevant counterparts they were already canonicalized to.
-
RE: Wordpress tags and duplicate content?
The SEO benefits of tags tend to be negligible. Mainly they help the user experience by tying together similar articles, increasing a visitor's time on site and pages viewed by offering them similar content that may draw their attention. The biggest problems with tags (especially on WordPress) come when you don't set tag archives to NoIndex and have articles display in full instead of as a snippet on the archive page. When this happens you create a duplicate version of the post on every tag archive attached to it, plus the author archive, the category archive, and the homepage. And a one-off tag with no other posts associated with it becomes useless: it creates more duplicate content without effectively tying your article to any other similar posts.
I almost always tell people with WordPress blogs to switch posts to showing snippets on the homepage and archives to lessen duplication, NoIndex archives, and (where possible) clean up & remove useless one-off tags they don't think they'll use again, and plan new content to add to the one-off tags they would use again.
-
RE: Duplicate page url crawl report
Also, if it is a WWW vs. Non-WWW issue, make sure to go into Google search console and set the preferred domain for your site properly.
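Alongside the Search Console setting, it helps to enforce one version server-side. On Apache that's a standard rewrite (a sketch assuming mod_rewrite is enabled and you want the www version as canonical; swap the domain for your own):

```apache
# 301 all non-www requests over to the www version
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```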
-
RE: Updating old blog posts in Wordpress to appear more recent?
For the most part, trying to re-post your old content with a newer date in order to make it seem fresh when it isn't is really just an attempt to game the system. It may work somewhat in the short term, but it's not a good long-term strategy. New, unique and relevant content will always be better.
-
RE: Banned Page
I'm having a hard time following. A bit more info would be helpful if you don't mind. What pages are these? Who were they banned by? By "banned" do you mean they were penalized and/or removed from the index? What 3rd party app are you using?
-
RE: Canonical tags and Syndicated Content
I have done these and I agree completely.
Also, the bit about Canonicals to move a site and then 301 later was actually talked about at SMX by Maile Ohye of Google as a legitimate and good use for situations such as buying or taking over someone else's site as a means to pass link equity while also giving users a better experience by letting them know you are transitioning... giving them time to change their bookmarks instead of potentially causing them to bounce by sending them somewhere they didn't intend to go.
(though don't quote me on her saying anything about "link juice" or "link equity" specifically... it was about a year ago and it's been ages since I've listened to my personal recordings of the session [and actually, I'm not sure I was even allowed to record while Google and Bing reps were speaking... but oh well])
-
RE: Does a country specific TLD implicitly influence the full country name for keyword matching?
Typically a ccTLD is suited for that specific country/region. So having a .co.uk will make you more relevant to searchers in the UK, but not for searches from, say, the US looking for something in the United Kingdom (unless they happen to be searching through Google.co.uk). This isn't 100% always the case, but it generally holds. If you're attempting to globally reach people searching for that term, you'd probably be better off with a generic TLD.
-
RE: Duplicate Content Issues on Product Pages
Similar to what BJS1976 and Takeshi stated, the way we handled the bulk of duplicate content issues from a similar circumstance for our ecommerce site was handling the different varieties of the same product through parameters and then canonicalizing the parameters to the version of the URL sans parameter.
For example, due to database reasons /product1.php?color=42 and /product1.php?color=30 are the same product, but one is red and one is blue. The pages are exactly the same and have radials/buttons/dropdowns to choose any available color. /product1.php would default to one specific variation we chose (usually the best-selling color), and then /product1.php?color=42 and /product1.php?color=30 each had a rel=canonical tag added pointing at /product1.php.
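Concretely, each parameterized variation's head carried the same tag (domain here is a hypothetical stand-in):

```html
<!-- On /product1.php?color=42 and /product1.php?color=30 -->
<link rel="canonical" href="http://www.example.com/product1.php">
```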
For any remaining products flagged as duplicates that couldn't be fixed that way, we set those aside to have myself and another copywriter work on creating further content that would set them apart enough as to not be duplicates.
-
RE: Duplicate Content Issues on Product Pages
I agree with Everett from a standpoint of User Experience. It could potentially be better for users if they appeared on a product page where they could then choose color, size, etc. variables for their product instead of having to click through multiple pages to find the right one or scroll through a huge list of variations.
The reduction in pages should also help consolidate link equity and keep pages from cannibalizing each other in the SERPs.
As for Takeshi's suggestion on Canonicals, I'm a fan of the rel=canonical tag but the potential problem with using them in this instance is twofold. 1) As Takeshi mentioned: "as far as Google is concerned you only have 1 page with the content on it" and 2) Canonicals are suggestions not directives so the search engines may choose not to recognize it if not used properly.
-
RE: 301 Redirect Question
I ran a crawl in Screaming Frog as well. I don't see a problem with the 301s. They mostly seem to be pointing the non-www pages to the www versions... assuming you want the www version ranking over the non-www, everything is fine. As long as everything is pointing to the correct version of the page, you shouldn't have any issues.
-
RE: Google Indexing Desktop & Mobile Versions
You can easily restrict portions with robots.txt depending on how exactly your site is set up. So for instance, something like:
Desktop site (http://www.domain.com/robots.txt):

User-agent: Googlebot
Allow: /

User-agent: Googlebot-Mobile
Disallow: /

Mobile site (http://m.domain.com/robots.txt):

User-agent: Googlebot
Disallow: /

User-agent: Googlebot-Mobile
Allow: /

Or:

User-agent: *
Allow: /

User-agent: Googlebot
Disallow: /mobile/
Allow: /

User-agent: bingbot
Disallow: /mobile/
Allow: /

User-agent: Googlebot-Mobile
Disallow: /
Allow: /mobile/

User-agent: bingbot-mobile
Disallow: /
-
RE: Include or exclude noindex urls in sitemap?
You could technically add them to the sitemap.xml in the hopes that this will get them noticed faster, but the sitemap is commonly used for the things you want Google to crawl and index. Plus, placing them in the sitemap does not guarantee Google is going to get around to crawling your change or those specific pages. Technically speaking, doing nothing and just waiting is equally as valid; Google will recrawl your site at some point, and sitemap.xml only helps if Google is crawling you to see it. Fetch As makes Google see your page as it is now, which is like forcing part of a crawl. So technically Fetch As will be the more reliable, quicker choice, though it will be more labor-intensive. If you don't have the man-hours for a project like that at the moment, then waiting or using the sitemap could work for you. Google even suggests using Fetch As for URLs you want them to see that you have blocked with meta tags: https://support.google.com/webmasters/answer/93710?hl=en&ref_topic=4598466
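For reference, a sitemap entry is just a url block in sitemap.xml; a lastmod date can nudge Google toward recrawling a changed page sooner, though there's no guarantee (URL and date below are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/changed-page/</loc>
    <lastmod>2016-06-01</lastmod>
  </url>
</urlset>
```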
-
RE: Include or exclude noindex urls in sitemap?
That opens up other potential restrictions to getting this done quickly and easily. I wouldn't consider it best practices to create what is essentially a spam page full of internal links and Googlebot will likely not crawl all 4000 links if you have them all there. So now you'd be talking about maybe making 20 or so thin, spammy looking pages of 200+ internal links to hopefully fix the issue.
The quick, easy sounding options are not often the best option. Considering you're doing all of this in an attempt to fix issues that arose due to an algorithmic penalty, I'd suggest trying to follow best practices for making these changes. It might not be easy but it'll lessen your chances of having done a quick fix that might be the cause, or part of, a future penalty.
So if Fetch As won't work for you (considering lack of manpower to manually fetch 4000 pages), the sitemap.xml option might be the better choice for you.
-
RE: Should I keep writing about the same using rel canonical?
You don't just want to hit a keyword that you think is/will be most important. Organic search terms are all over the place and you can't account for every way an organic searcher could possibly find you. But the algorithm, and RankBrain in particular, takes relevancy into account... how things are related to each other and why that matters. You don't just want someone to find you for [ski vacations], you want to be an authority on ski vacations, which includes ski rental tips, what equipment you should have, what clothes allow warmth & freedom of movement, popular products, popular locations, and so on.
-
RE: Should I keep writing about the same using rel canonical?
I know how tedious it can get writing about the same thing over and over again. But it doesn't need to be "We sell Product X, Product X does this, look at pictures of Product X" constantly rehashed. Check out related and relevant terms, go find an LSI keyword tool to find things that are related to your business without being the same core term you always use, or do some research on competitors around the country to see how they handle this stuff and pull some ideas from them about what you could be writing. Find things in similar and related industries to tie your product and services to... an ice cream shop could write about local beaches; a towing service can write about regular tire maintenance; a moving company can write about the history of bubble wrap; and so on.
-
RE: Website indexed but not ranking for anything
Without seeing the backend of the site and all the collected data, I can only speculate as to potential causes.
If there was a manual penalty, you'd be able to check that in Search Console/Webmaster Tools. Make sure your sitemap is uploaded to search console as well. Make sure there are no errors associated with it. Run a "fetch as" in Search Console to ensure that Google can properly see your homepage and submit to index. Build out the content. Build out the link profile. Determine your core search terms that you would like to be ranking for. Make sure to build out pages associated with those things. If things like the English Writing Course, the Premier League and the Newsletter Publishing are important then make sure they're higher on the page and more noticeable instead of in the footer. The meta keywords tag is not actually important but you do stuff a lot of keywords into it on some pages... consider toning it down a bit. Make sure your meta descriptions are well written and explain what the page is about. Don't just copy & paste a line or two from the paragraph of page content to re-use as a meta description.
-
RE: Homepage not indexed - seems to defy explanation
I took a look at all of the usual suspects as well... which amounts to pretty much everything everyone else mentioned, but I was intrigued by this issue and thought maybe another set of eyes might notice something that was off. Nothing was wrong in the page source from what I saw, I had no issues crawling it myself, and I didn't see any penalties. Normally I'd think that if your homepage wasn't appearing for branded organic searches then a penalty was levied against you, but when that's the case the homepage is still normally findable in a site: operator search. Maybe it's related to all the backlinks that were lost/deleted in the past month, but I'm not sure why that would be the case unless removing the homepage from the index was a Penguin response to link issues... though I was under the impression that Penguin devalues the link source, not the link recipient, and deleting/removing links seems to be a preferred method of handling Penguin-related issues. So if there is a relationship between Penguin and your homepage being deindexed, I'm not sure why, nor am I certain how to fix it, as I'm not seeing anything in particular that screams "linking issue" at me (though I only did a fairly cursory inspection of things).
So I am stumped. Whenever the issue is figured out, I would love to know how/why this came to be.
-
RE: Homepage not indexed - seems to defy explanation
Glad you figured it out. I honestly didn't think it would have been the canonicals. I'm a little surprised that the bots didn't just choose not to respect the suggestion as opposed to blanking your site from the index. Didn't think that was even a possibility from incorrect canonicals. Good to know for the future though in case anything like this comes up with anyone else's site.