No problem!
As always, testing things out for yourselves will give the best results, so if you know you have titles nearing the 65-70 length, Google them for yourself to make sure the title is showing up how you like.
Hey Arnaud
I think it is definitely worth testing for UX and CRO purposes, and I don't think you'll do your pages any "harm" from an SEO POV.
If the overlays appear on click within the category, and the rest of the category page is readable and crawlable, it shouldn't cause any problems.
What's great is that you've already considered how you could rank those individual products themselves by giving them their own URL. Those might struggle a bit though if the URLs are not linked to directly from within the product category silo, or elsewhere on the site.
However, I don't see that as necessarily a bad thing. Unless you have a specific product type that sells very well and has significant search volume itself, I'd wager that most of your inbound organic traffic would best be served by the category pages anyway (IE, if searching for blue widgets, the category shows all the widgets you have, not just one type). That itself is more likely to match the user intent of those people entering your sites.
I would just ensure that you nail the tech and onsite aspects of those category pages - and the rest should be fine.
I'm not surprised Moz is flagging those pages as duplicate content and I wouldn't be totally surprised if Google did in the future.
Put it this way: the pages are identical bar a single-sentence title description, a price and roughly a 20-word section describing the product. Everything else is identical. It's duplicate.
Look at it another way, through Google's eyes. Here's how the two pages look when crawled by Google:
(If that doesn't work, try it yourself at http://www.seo-browser.com/)
Just look at how much text and HTML is shared between the two pages. Yes, there are key differences on the pages (namely the product), but neither the Googlebot nor the Mozbot is going to recognise those differences when it crawls them.
Presuming Google ignores the site nav, it still has a bunch of text and crawlable elements that are shared - pretty much everything under the product description. It doesn't see the individual images and the flavour text is frankly too small to make any sort of dent in the duplicate content %.
I'd seriously recommend revising how your product pages look - there's far too much repeated content per page (you can still promote these things on each page, but in a much, much smaller way) and the individual descriptions for the products, in my eyes, are not substantial enough.
It's also worth remembering that blocking a URL in a robots.txt file does not automatically mean that the URL will be deindexed. The robots.txt file will prevent robots, such as the Googlebot, from accessing the URL and that's all. Now obviously, if a robot repeatedly tries to access a URL and gets denied, it will eventually stop trying, which is what leads to the page being deindexed (the same principle applies to 404 and 410 errors).
Therefore, if you want a quicker and more definite deindexing solution, you should use the explicit noindex robot command, as recommended above. This will tell any visiting robot to not index it straight away, which will reduce the number of revisits and drop the page from the index faster.
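To make the difference concrete, here's a minimal sketch (the /private-page/ path is just a placeholder). In robots.txt, this only blocks crawling:

User-agent: *
Disallow: /private-page/

Whereas this, placed in the page's <head>, explicitly requests deindexing:

<meta name="robots" content="noindex">

One caveat: for the noindex tag to be read, the page must not be blocked in robots.txt, otherwise the robot never fetches the page to see the tag.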
It's my understanding that the Googlebot can read this text, regardless of CSS styles. You can actually check this yourself by putting the page URL into this website (do a simple search for a free report). That browser fetches your page as the Googlebot would see it, so you can check whether the content is read by Google or not.
Now, as for whether or not this might be deemed duplicate content, I don't think you have much to worry about as you have already taken necessary steps to prevent any penalty. Implementing the canonical tags that you have will tell Google that any duplicate content there is for a user reason and is not trying to game the system.
Provided those tags remain in place, I think you'll be fine. A problem may occur if you're building outbound links and hiding them using display:none or other CSS styles. This is a big no-no and can get your site deindexed if Google finds it. Always worth bearing in mind for people on your team, but it looks like you've got everything under control!
Do you mean meta description or meta keywords?
If meta description, I think it does still have a small benefit to your SEO if optimised properly, but more importantly its call-to-action function makes it very worthwhile. I'd always put a custom one in, rather than let Google/Bing pull one in for me.
As for meta keywords, I think their time has passed.
Google says they do not use the tag when deciding where to rank your site. We have to take what Google says with a pinch of salt, but I believe that this is true.
Bing and Yahoo are said to still take the tags into account - although even they admit it's a very small influence to their ranking.
I think the reason the tag has been abandoned by many, many sites is that it's a pretty clear give-away to competitors: all they have to do is look at the meta keywords in your source code to see which keywords you might be targeting.
For me, their time has passed and I don't use them at all when working with clients or on new sites. I believe I'm in the majority, but you certainly won't harm your site by using them, provided they are not over-optimised.
Looks like you've used the right format and that Google will be able to read the text behind the "read more".
You can actually test this by using services that show your website as the Googlebot sees it. SEO-browser is one of them, and the simple search is free.
Here is the result for that webpage.
You can see the text behind the "read more" script on that page, which would indicate that Google isn't having a problem seeing it.
If you can export all of the URLs you want to check into a list, there are a few tools you can import them into to check the response codes.
I think Screaming Frog does this, but a web-based solution which is quick to use is this one by Tom Anthony. You can also define what user agent you want it to test as.
All your 301s will come up listed just as that, same for 302s, 404s etc.
The general rule of thumb is 70 characters (with spaces).
The reason there is a discrepancy is that Google truncates titles on width, not actual character length. A 'W' is wider than a 'd', so if you were to have 70 'W's, for some strange reason, the title would be truncated earlier.
If you feel you're using certain "wide" characters in your titles (perhaps they recur in your brand name, e.g. Wesley's Wheels), then 65 characters might be a better number to aim for. Other than that, 70 characters is usually fine.
Hi Victoria
Seeing as you're now on WordPress, there are a couple of plugins you may find useful.
First is Link Juice Keeper - this will redirect any 404s that are caused by any external sites linking to you. The links are redirected to the root domain to ensure any strength is passed.
Another is Broken Link Checker - as you may have guessed, this will notify you if you have any internal broken links.
Link Juice Keeper might sound appealing, but I don't think you can redirect certain pages to specific locations, I think it all just goes to the root domain (but I could be wrong).
Screaming Frog is another great tool that would let you find broken links.
Hope these help
I can definitely feel a big update coming. Either targeting fake social signals or making a real crackdown on disguised paid links.
In one of my other projects, I've seen a lot of volatility in the top 10 for a keyword - with all the major dancers having a huge amount of paid blogroll and homepage links in their link profile.
If Google is cracking down on them, I'd seriously love to see it.
I've seen a number of the sites that I work with hit around the 10-12% Bing mark for a while now. I can't say I've seen any change of the magnitude that you're reporting, however.
With the launch of IE10 and perhaps with growing discontent towards Google, people may have switched to Bing as it's the default search engine in IE10, which for all intents and purposes is a pretty polished browser.
I'd maybe look to see if there is any correlation between browsers as well and if IE saw a big increase too.
They might be reporting that the keyword doesn't appear in the URL path after the domain and TLD. That is, because the keyword isn't appearing like this: "http://www.domain.com/internet-marketing", the platform may not think it's appearing at all.
Whatever the technical reason is, I wouldn't worry about it much for two reasons. First because, as you say, the keyword is there in the domain. Second because having the keyword in the URL is a pretty insignificant ranking factor. This report is a little dated now, but the SearchMetrics ranking factors show little correlation between the keyword appearing in the URL and the page's visibility in the SERPs. I'm not one to optimise based on correlation factors, nor do I take everything in that report as gospel, but I am inclined to agree that, while having your keyword in the URL might provide a slight benefit, it's certainly not a definitive factor.
That's why I wouldn't worry too much, in your case, and move on. Hope this helps!
With the greatest respect, if you want advice, you need to be completely transparent. To say your website does not have any blackhat or even greyhat SEO is just not true.
Literally the first thing that stood out for me in your OpenSiteExplorer links was your website appearing on site-wide blogrolls such as this: http://www.blogohblog.com/wordpress-theme-businezz/
Whether you built them, or had any knowledge of them or not, you need to understand that if Google sees your website appearing in site-wide blogrolls, all with the same exact-match anchor text, surrounded by completely irrelevant links to diet/workout sites in what looks so obviously like a paid network - your site couldn't look any more blackhat.
Throw in some random blog comments and some .edu spam to boot and I am beginning to see why you may have been slapped by Google.
You've got some serious TOS violations here. If you've been employing a company to do your SEO for you, or any other company that has been working on your website, I'd suggest you contact them ASAP and demand an explanation.
Hey there
The way the disavow tool works is that, once the file has been processed (which happens pretty quickly), Google simply ignores those links - effectively giving them a nofollow attribute.
That means that the webmaster tool link report will always have the link still in there if it exists, even if it has been disavowed.
The disavow process simply depends on the uploaded file being in the right format and on it being processed. Once that's done, it will be in effect straight away; it does not require a recrawl or cache.
It's worth pointing out that if you upload a new file and a link that was once in the file is no longer there, that link will be considered again.
Once your disavow file has been uploaded, probably a good idea to wait 2-3 days to make sure it has been fully processed. But once that's done, your reconsideration request will take into account the file - particularly as the reconsideration request is a manual review. If they can see the file processed, it will take into account that the links have been disavowed.
Hope this helps
Hi there
First impressions - I quite like it. It's clean and the navigation looks good. I think the images could be utilised a bit better; they're a bit small and you can't read the call to action in one of them (which, given your keyword, is a bit of a sin).
Nice use of H1s, H2s, H3s. Nothing looks stuffed/unnatural which is great. Looks like you've got nice interlinking throughout the site, too.
I think you could benefit from a bit more text content. It's a bit thin when you look at it through Google's eyes.
There's something though that I caught when the page was loading. That is, it was a bit slow to load.
Here's a GTmetrix report of the page. This looks at the speed of the page and improvements you could make to it. You'll see here that one of the biggest recommendations is to use browser caching. There's a guide on GTMetrix to help you do that.
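For reference, browser caching on an Apache server usually comes down to a few lines in the .htaccess file. Here's a minimal sketch, assuming the mod_expires module is available (the durations are just illustrative):

<IfModule mod_expires.c>
ExpiresActive On
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType text/css "access plus 1 week"
ExpiresByType application/javascript "access plus 1 week"
</IfModule>

The GTmetrix guide covers the details, so treat the above as a starting point rather than a drop-in fix.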
But I do like the page - it's not flashy for flashy's sake and it's got a solid base from an SEO perspective. Maybe you could also include a portfolio of some of your design work or a few testimonials from previous clients over the work that you did. Might add a little extra trust to your brand and would also give you a chance to really wow a client with some past designs.
This may (and I stress may) have something to do with whatever algorithmic update/refresh many have experienced since the 17th. I've seen before with refreshes that old URLs pop back into SERPs, only to correct themselves a few days later.
Of course, just to be sure the 301 has not broken, click through on the link that's appearing to see if it takes you to the new landing page.
Given a few days, I imagine you'll see the proper URL displayed again.
Hi Mike
Alexa rank basically works on how much traffic your website is receiving. In order to get a better score, you will need more traffic to your site.
One of the best ways to increase traffic is to reach out to communities that are relevant to your website. See if you can find blogs in your industry that have an engaged audience (comments, social shares etc.), see if there are any industry forums you could participate in, have a look on social networks for any groups about your industry, particularly Facebook, LinkedIn discussion groups and Google+ communities. Finding these places with an active audience offers the opportunity to increase traffic.
That can be achieved by integrating yourself into the community. That does not mean joining the group/forum, dropping your link and waiting for clicks - that's not integrating into the community. Integrating means becoming part of the community - help out fellow members, offer bits of advice, produce content or tools that would be of high value to people and so on. Helping the community will bring traffic as a result.
I will say this, however: Alexa is notoriously unreliable and as a general metric, I think it's fairly poor. I wouldn't concern myself with what Alexa has to say about my site, or my clients' for that matter.
When you're reaching out to the communities as I mentioned above, do so to build relationships, not traffic. By building relationships, offering things of worth to the community members and generally being helpful, traffic will follow as a result. Not only that, but that traffic becomes much more qualified and probably has a higher chance of converting.
Hope this helps.
Presuming that you're going to be linking to your website in the comment, I'd probably keep it limited to your own niche, but there's definitely room for a few comments from other industries.
If your comments are engaging, provide value to that blog's community and are not on blogs that are spammed to death, then you won't be doing any harm. The key is to comment something of worth and to integrate yourself into the community.
I always think of blog commenting as a way of establishing a community presence and to raise unaided brand awareness. Any subsequent link of page strength is an after-thought for me.
This is a fascinating question.
Regarding your question about 404 pages getting a 200 status. So obviously, Google doesn't index 404 pages, and de-indexed pages do not pass on link juice. However, like you say, some people and sites link to 404 pages and so, were these ever to go live, you'd imagine it would have some sort of strength/authority.
But how could you practically accomplish this? If you make the 404 page a 200 page, you've now got no 404 page for your website, which could be very bad indeed. So, you'd probably want to substitute that page with a new, fresh 404 page. But if that sits as the 404 page and gets marked as a 404, wouldn't the links become void again?
If you then moved the old 404 to a new page, it loses the links once pointing to it.
The Hongkiat webpage is a really clever idea, as it takes all those pages and makes a shareable hub, which of course then gets all the links and strength.
Hi John
I'm not going to get into whether removing these links was worthwhile (in short: I can't see them having either a positive or negative effect - the effect would be nil) but let me be clear about the main matter at hand: get your money back ASAP.
There's no reason at all for you to have to pay for links to be removed when something like the disavow tool exists - in fact I'm fairly convinced it was one of the reasons why it was developed in the first place. You've essentially been extorted. There is no sense in this - otherwise, I could easily find your website, build thousands of bad links on a crap network I could set up and then threaten you with them until you paid me to remove them. I'm not going to do that of course, that would be despicably evil!
If you're concerned about those links then disavow those links - don't give into demands like this.
Now, as for getting your money back - I think you could successfully dispute this with PayPal. You have paid for a digital service and the seller has failed to deliver (and refuses to). The work implied the links would be removed, yet the links still exist in some form. The seller is effectively in breach of contract, so you should get your money back. For better or worse, PayPal does favour the buyer in the majority of cases - in this case, definitely for the better.
I hope you get the money back in full.
Yeah, that's the general consensus around blogs: subfolder > subdomain. Reinforced in this Moz guide.
However, I recall someone dispelling this myth with backup from Google or Matt Cutts that they are treated no differently from each other. There's also this post that concludes that there isn't a difference. Aaron Wall, well respected in the industry, thinks subdomains are arbitrary and aren't treated as a separate case.
My preference is to remain with a subfolder - I genuinely think it looks a bit neater. Consensus is somewhat split over the whole "which is better for SEO" question, so I'd stick with a subfolder if it's easy for you to implement and you're not too bothered about presentation in the URL.
"I am compltely transparent and this blogohblog link isn't supposed to be sitewide and it is currently under investigation.
I don't think i've got any serious TOS violations here to be honest as this is probalby the only or one of the only greyhat links that we have."
I think we're going to have to agree to disagree here.
http://www.blogperfume.com/ - Optimised anchor text in "friends" section, looks very much like a bought link, violates TOS
http://www.psdeluxe.com/ - Looks to be another bought link - violates TOS
http://ilearntechnology.com/ - Random anchor text heavy link at the bottom of the page, clearly out of place, probably bought, violates TOS
http://www.blogherald.com/2008/03/24/easiest-website-builder-ever/ - Low quality blog commenting
http://www.designknock.com/ - Blogroll link (actually separate from the blogroll, making it look even more suspicious). Likely bought, violates TOS
http://www.webstreamingsmania.com/ - Questionable article hub
http://jawbreaker.hardware-one.com/forum/read_msg.php?tid=368&forumid=feedback - Low quality forum signatures, looks to have been done for link diversity
http://news.jrn.msu.edu/onlineenvironments/2012/01/11/kates-experience-and-expectations/ - Edu comment spam
http://clubs.uci.edu/fada/?p=1674 - Edu comment spam
Do I need to go on?
Look, I'm not doing all this to be evil, come across as holier than thou or even to victimise you - but just by going through the first 2 pages of your OSE report I found these links which are either likely violations of the TOS or low quality. If I can see that, there's a decent chance that Google has seen the same.
It's not to be cruel to you, but to bring home the magnitude of the potential problem. I know what the warning signals look like because I've worked with a number of sites that, upon auditing their link profiles, have had similar links and have been penalised. Whether you're aware of these links or not is not the point; all I wish to do is make you aware of the potential minefield you're in so you can start acting on it, and perhaps ask some serious questions of any SEO agency you might have worked with in the past.
Hi Andy
Think it's very wise of you to have considered this potential duplicate content problem.
Having a rel=canonical tag on the separate categories, or even a meta noindex tag on them, would make sure that the URL is not indexed in Google, thus removing any potential duplicate content.
I can't really see a way of having both a main deals URL and a category deals URL both being indexed and ranking because, as you have said, the pages would either have zero content or duplicate content.
With that in mind, I think your current format is the best one. Having a big /deals page with all your offers on it will hopefully provide lots of rich content so that people will link to the page, which in turn will rank the page for a number of keywords - while you're also allowing people to filter down and get to what they want. Just make sure that the separate sub-category pages have either enough unique content on them, or a canonical/meta noindex tag on them to avoid a duplicate content issue.
I would say, though, that as your site aggregates daily deals, you are at a bit of a risk of still supplying duplicate content from other sites. I'm not sure how the deals are fed in, but if you get a rush of deals en masse from one website and the deals' titles and descriptions are all the same, this might also be seen as duplicate. If you can off-set this with unique content on the page and a system to put in your own titles/descriptions, then it shouldn't be a problem.
Good question this - would be interested to see some other Mozzers' POVs.
Hi there
No, the canonical will not pass the meta robots directive to the original page, so you're safe there.
What you're effectively doing is using two ways to prevent duplication - the canonical tells search engines that the query-string URLs are just versions of the original page, while the noindex,nofollow tags keep them out of the index directly.
Nothing wrong with using two methods simultaneously to do this - always a good idea to be safe - and so the end result will be that the URLs with query strings will be very, very unlikely to be indexed.
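As a sketch of how the two methods look side by side (using http://www.example.com/page?sessionid=123 as a placeholder for your query-string URLs), the <head> of the query-string version would contain:

<link rel="canonical" href="http://www.example.com/page" />
<meta name="robots" content="noindex, nofollow">

The canonical points back to the clean URL, while the meta robots tag keeps the query-string version out of the index in its own right.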
Hi John
No, if you put the redirect in place it won't create duplicate content. In fact, redirections are often used to avoid any potential duplication problems.
The redirection tells the Googlebot that the old URL is no longer required and that the URL it points to is the correct one. This will tell the bot to stop indexing and crawling the old URL, pass on any of the links pointing to the old URL and consider the new URL to be the definitive article.
Out with the old and in with the new, so to speak!
You can read more on redirection with the Moz Guide.
Hope this helps!
Abdul is right, looks as though WMT is having a bit of a hiccup today. Here's a link to (one of) the discussions going on in the products forum. Looks as though Google is aware of the problem. My guess is that it might have something to do with the Google PageRank update.
Furthermore, as far as I am aware, Google does not remove any disavowed links from your WMT report. Source A, Source B. Think the link has to be physically removed from the site in order for WMT to stop reporting it, even then it may take a long time.
Hi Kate
From an SEO perspective - is there anything "wrong" with your current domain, that you can see? EG - do you have a bad backlink profile, do you think you're affected by a Google penalty? If the answer to any of these is "yes", you might want to consider a new domain.
As a whole, I don't see any benefit with a subdomain solution.
But really, that's where I think the SEO considerations stop. Look at it more from a business or branding point of view. If this is a significant relaunch and rebrand, do you think that a new website would coincide with the relaunch and give it extra emphasis? Is it a significant change of direction and do you think updating your current website would confuse or isolate your user base (as you're now something completely different)? If yes, I'd look at a new domain.
But if you're just updating the look and feel of your site, or even if you're changing your marketing message, look and approach, but in essence you're the same company with the same values and user base, then I would stay on your current domain. You'd get the benefit of your currently existing links yes, but really you'll be keeping continuity and you won't be interrupting your current users' journeys at all.
So there is a small SEO consideration here, but beyond that this is more of a business decision. If you're completely changing the way you're doing business, go with a new domain. If you're updating the brand and its appearance, but keeping the core values, then stay as you are.
Hope this helps.
Hi William
Just as a quick note, if you click on the "1" in the Other URL column, it will take you to a page that lists the actual URL the report is flagging as duplicate.
Hi Craig
You touched on one of the reasons this is happening in your post - you could have external links to these pages. Also, they could still be appearing in the sitemap.
If you go into Webmaster tools > Health > Crawl Errors > Not Found and then click on one of the URLs, you can check whether or not the page is in the sitemap or whether it is being linked to from somewhere.
If you have external links, you have four options. First, you could attempt to change the URLs on the pages they're being linked from. This could be difficult and/or slow. Second, as you say, you could 301 redirect. This would be useful if people are still coming through those sites, as you'll be fixing their user journey. It would also pass on any link "juice" that page has to another. Third would be to start returning a 410 error. This explains 410 response codes - it basically tells the Googlebot to treat the URL as gone permanently. This can be a bit tricky to set up and you have to be sure you won't want to use the URL again in the future.
Finally, you could leave the 404s in place. If none of the pages have any strength, no referral traffic is coming from them and they aren't interrupting a user journey in any way, I would simply leave them. Google knows that 404s are just a matter of process and so recognises that 404 errors are simply a natural occurrence. It would only ever be a problem if you returned tens of thousands of them, so you may just want to leave them be.
I would probably 301 redirect any old pages carrying strength to relevant equivalents (if not, the root domain) and leave the other 404s in place. I would rewrite ASAP any URL that is interrupting a user journey.
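For what it's worth, if you're on an Apache server, both of those options are a line each in the .htaccess file (a sketch with placeholder paths):

# 301: permanently redirect an old URL that carries strength
Redirect 301 /old-page.html http://www.example.com/new-page.html

# 410: tell robots the URL is gone for good
Redirect gone /dead-page.html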
Hope this helps!
Hey there
To quote Google on this, with the issue of ASCII and UTF-encoded characters, like Arabic:
"Yes, we can generally keep up with UTF-8 encoded URLs and we’ll generally show them to users in our search results (but link to your server with the URLs properly escaped). I would recommend that you also use escaped URLs in your links, to make sure that your site is compatible with older browsers that don’t understand straight UTF-8 URLs"
So their recommendation would be to have both URLs available (the English and the Arabic) in order to support all users. So the fact you already do this is a good thing.
The next step would be to make sure you are handling duplicate content correctly. If the Arabic and non-Arabic URLs are pointing to a page with the same content, Google _should_ be able to recognise this as the same page and not penalise you for duplication. So if the Arabic URL and the "escaped URL" (the ASCII/English equivalent) both go to the same page, you should be fine. I've experienced this quite a few times with Turkish websites, for example, which also have UTF-encoded characters.
However, you can eliminate the risk further by adding a canonical tag to each page. As far as I am aware, the canonical tag will support Arabic characters and so, on each page of the site, add a canonical tag that points to that page. For example, with the URL above, you would want to place a canonical tag like:
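<link rel="canonical" href="http://www.example.com/your-page-url" />

(Swap the example.com placeholder for the full, preferred URL of the page the tag sits on - the Arabic version in your case.)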
You can read more on canonical tags here: http://moz.com/learn/seo/canonicalization
Do be aware that for XML sitemaps, the URLs in the sitemap need to be URL-escaped - that is to say, UTF encoded URLs need to be made into their ASCII equivalent. You can read more about that in this Google guide to using non-alphanumeric characters in Sitemap URLs.
Hope this helps.
Hi there
First of all, does your site use an .htaccess file? This is probably the most common solution used to implement 301 redirects. The SEOMoz guide to redirection is really useful in explaining how to implement redirects for apache servers and others.
These three generators can create the code for you, but it's very important to learn how the .htaccess file works before trying to implement it. Therefore, you should acquaint yourself fully with the Moz guide linked above.
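To give you a flavour of what the generated code looks like, here's a minimal sketch with placeholder paths (do read the guide before deploying anything):

RewriteEngine On
# Permanently send one moved page to its new home
RewriteRule ^old-folder/old-page\.html$ /new-folder/new-page/ [R=301,L]

This relies on the mod_rewrite module being enabled on your Apache server, which is another reason to get familiar with the guide first.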
Alternatively, if your site is running WordPress, you can setup 301 redirects very easily with the Yoast SEO plugin.
Hope these resources help.
Very good point you've raised - 301ing those URLs effectively makes the links to your site "live" again. If the links sit on a dodgy/spammy/poor quality page, then it could harm your site and I wouldn't put the redirect in place.
By and large, if you're beginning to doubt whether the link is worthwhile or not, chances are it's not. So if you have a bit of doubt about the link, don't put the 301 in place.
Hi Joshua
From personal experience (won't cite external sources here) working with a number of sites:
The mobile-friendly update - I've definitely seen an impact in mobile search. I think there's quite a clear handicap in mobile search if the webpage you're trying to rank isn't mobile friendly (while desktop search looks to be unaffected).
Regarding SSL - I'm yet to be convinced. I've yet to see a strong correlation - or any correlation for that matter - between websites switching to HTTPS and seeing their rankings improved. I've followed my own sites, competitor sites, and industry trackers like SerpWoo on this and I don't think the impact is quite there (yet). In fact, I've seen more cases of companies migrating to SSL seeing rankings drop than improve, because of the problems they have faced in the migration.
In short:
Mobile Friendly - I do see this as a must for SEO.
SSL - treat it as a business decision, not an SEO one. If you're looking to integrate an onsite payment solution, for example, it makes sense for the site to be SSL.
Not at all, I'm happy to help!
I can only presume that the theme query you're getting is related to the WP/Joomla theme you're using. Wouldn't be able to help specifically without seeing it, but I would assume that the URL without the "theme=default" at the end should be the canonical URL.
For more stringent decisions - if you have a large number of URLs and you're thinking of redirecting some, I'd start by looking at your analytics traffic: has any visitor come to your site via that URL within the last 60 days? If yes, I would definitely redirect. If no, I'd ask this:
Do you have any inbound links to that page? If you put in the root domain into Open Site Explorer and click on top pages and export the results into a CSV - you can see which pages have inbound links. Those without links can be ignored, those with links should be 301'd (provided you are happy that the links are of a good quality), in order for you to preserve the link equity, or SEO 'strength', of the link.
Hi there
Well, in theory, most if not all of the "strength" of your links will pass on to the new site if you use a 301 redirect. We've had a recent Matt Cutts video talking about this.
In order to streamline the process, I would replicate an identical site structure on your new .com site. Same /sub-folders/, same primary article names - the more similar you can make it to your .net domain, the better.
This will allow you to 301 redirect the old domain to the new one, pointing the equivalent pages and sub folders to each other - so domain.net/sub-folder/ to domain.com/sub-folder/ and domain.net/article1.html to domain.com/article1.html. This way not only are you ensuring that the user is following the same path as before, but all of the "strength" and previous links are being pointed to their new, equivalent pages.
It's such a big help if you can keep the site structure the same. Now, there may be a case for not wanting to redirect everything - thousands and thousands of 301s can slow down the .htaccess file, not to mention the time it may take. Some pages may not be worth transferring anyway if they have no link juice or are never visited by users. In this case, it's perfectly acceptable to let these return a 404 error.
If you're looking to get the URLs you want to redirect in bulk, look in your XML sitemap. Download that and extract the URLs from it into Excel. Most of the time the listed pages will be the ones you want to redirect. Copy the list into another column, so you now have two identical lists. Then simply use the Find & Replace tool on one of the columns, changing .net to .com. You've now got all the URL pairs you'll want to put into your .htaccess file for redirecting.
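If the structure really is identical, you may not even need one rule per URL - a single catch-all in the old domain's .htaccess can map every .net path to its .com equivalent. A sketch, assuming Apache and using domain.net/domain.com as placeholders:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?domain\.net$ [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]

Any pages you've decided to drop would then need their own rules placed above the catch-all.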
Finally, it wouldn't hurt to contact some of the webmasters on the sites where you have your best links. If you tell them you've moved to a .com domain and only that needs changing, they can do the leg-work for you and can ensure that your new domain keeps its strength.
Hope this helps - good luck with the move!
Very much of the same opinion.
It's one of those 1 percenters. It won't have a big impact, but cumulatively with other things, it's a good idea.
If you can control that element of your SEO, no reason not to do it.
But if you're working on a CMS/Ecommerce system that maybe interferes with this and generates its own title names out-of-the-box, I wouldn't spend too many hours trying to fix it. Your time is probably better off elsewhere.
Hi there
I definitely admire your creativity on this one, but unfortunately Google Tag Manager loads tags asynchronously. That means the tags are added for users whose browsers execute JavaScript on page load, but the copy of the page that is crawled does not contain them. All of this means that the Google crawler won't see a noindex tag on the crawled version of the page if it's loaded via Google Tag Manager.
I think your reasoning for noindexing the pages themselves is a very good one. You're removing pages with thin or potentially duplicate content from Google's indexing, which is healthy, while keeping the content on the page for the user, which again is a healthy thing to do. I can definitely see the reasoning.
Unfortunately, the only way I can see the tags being implemented is manually by the webmaster. If the site runs WordPress, you can change the robots meta data very quickly in the Yoast SEO plugin. If you go to the page in question and scroll to the Yoast plugin section, you'll be able to select the noindex tag from a drop-down menu, meaning it can take as little as 30 seconds.
Hope this helps and good luck with the implementation.
Hi Misi
Don't mean to be pedantic, but can I ask: "Why?"
In previous years, I think WordPress was a better platform than Joomla in terms of SEO. But that gap has definitely decreased, and not too long ago there was a great blog post on making your Joomla site SEO friendly.
I don't see a major difference between the two nowadays. Joomla isn't as friendly out of the box as WordPress, but can be if set up correctly.
Unless you're absolutely set on switching to WP, I'd maybe save yourself the hassle and optimise your Joomla website better. If you already have good rankings, I think it would be a mistake to change simply because WordPress is seen as "better" for SEO.
Possibly. Internal links and their anchor text can certainly give Google a priority on what the page is about, and which preferred landing page you want to rank.
However, there can be more to it as well. How much does the sub-page 'talk' about the keyword? What is its content like? Do you have any canonical issues? How about the homepage - how much content is on there about the keyword?
You could be cannibalising efforts by having a number of pages all talking about the same thing. Content is quite often just as 'confusing' to Google as the internal links.
If the pages will be exact duplicates, you could do either of the options you've given above, or you could use a canonical tag and point it to the original page.
My personal preference would be to add a noindex tag in the head of the page, so it would be:
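<meta name="robots" content="noindex">

That goes anywhere between the <head> and </head> tags of the duplicate page.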
Of course, this means the page won't be indexed which is a shame as a knowledge base can be a great way of pulling in long-tail keyword traffic. If you ever wanted to rank it, however, the content would need to be made unique.
Hope this helps.
Hi there
Now, I've not used that plugin before, but I have helped to successfully migrate a legion of 8 sites to a new CMS, in the process going from .html to .asp while keeping the rankings intact (actually improving a little with fresh content), so hopefully I can help.
We ended up 301 redirecting the old URLs to the new ones, which worked for us. It was made a hell of a lot easier by keeping the same link structure, which I see you're planning on doing. I can't stress this enough, it helps so much if you can do this and replicate the structure.
We ended up dropping a few pages in the migration. We asked two questions of these pages - a no to both resulted in letting it return a 404, which is OK (don't be afraid of returning a few). If the page was either a) bringing in traffic at the first point of entry or b) carrying some link "juice" from external links, the page was 301 redirected to the nearest equivalent page in the new structure.
Now, none of this involves that plugin - I'd be curious to see if anyone has used it. I like the idea of it if it effectively means your URLs stay exactly the same. However, I'm just here to say that 301 redirecting has worked for me in the past. We've read recently that a 301 will pass all the previous strength of the link, which is also some comfort.
As I said before, it was made infinitely easier by keeping a consistent URL structure. If you can do that, which you're aiming to do, minus the new extensions, it can be a quick and fairly painless process. If you want some advice on how to quickly get the 301 lists ready, let me know (hope you like Excel!).
Hope my input helps, but I'm definitely joining you in wondering if anyone's used that plugin. Failing that, 301s can help preserve rankings.
I'd go ahead and 301 redirect those websites and pages.
With a 301 redirect you will also pass on any link equity the infringing websites once had, which in turn may help your organic ranking performance.
However, in order for that to happen, you need to ensure that you redirect the individual pages on those websites to the most relevant/equivalent versions on your own. Otherwise, you may see those 301 redirects treated as soft 404 errors.
Hope this helps.
The Mozscape index, as brilliant as it is, can in no way compete with the size of the index that Google can handle.
As a result, your WMT report should always show a larger number of pages, links etc. crawled. It's just bigger.
You're very welcome Candice - I've gotten a bunch of inspiration and motivation from this site from other members in the past, so if there's ever a chance to impart the same on someone else, I jump at the chance.
I do like the Q&A idea, as it opens you up to being both a service and an educational resource, kind of like the guys at Distilled, with their SEO service but things like their blogs and their DistilledU ideas. So it can and does work. It also gives you a long lasting effect from your social media interactions.
All the best going forward, can't wait to see what you can come up with!
This may have something to do with Google's recent change to show results as country-specific by default, rather than whichever TLD you use (.co.uk, .de, .fr etc).
This is causing a few rank checkers to throw off a few wild results. I've seen all the major ones be affected by this.
If you can't recreate the results and traffic is normal, don't worry too much, as the software people will be making fixes soon.
FYI - if you want to get round Google's change and still get specific results from a specific country, you can add:
&gl=us
&gl=uk
&gl=fr etc.
To the end of your query string. Replace the country code with whichever you need.
The idea of the canonical tag is to help search engines identify the original URL and content and to ignore any versions of it. If set up correctly, search engines will ignore and/or deindex the page with a canonical pointing to another page.
Therefore, on that premise, you are asking search engines to ignore the other page and all of its subsequent PageRank, authority, strength etc.
If you have a self-referring canonical tag, this is telling Google to treat that URL as the originator and to ignore any subsequent versions that are created, whether by yourself on another page, by your CMS through a dynamic URL or query string, or by people taking the content from your site and posting it elsewhere. In essence, it should not affect PageRank or authority at all.
Hope this helps.
If the penalty is algorithmic, then it could be a penguin or unnatural links penalty. Alternatively, some of your links that were powering your site could have been devalued.
Just popped your site into OSE and it can only find 73 links. That isn't many at all, so one may argue that it was quite fortunate for your site to be ranking this highly to begin with. Looking at your whois data, I can see the site is less than six months old - there is a "freshness" factor in the algorithm that will promote new content/sites to begin with, and it could be that this factor has now worn off.
Having looked at your link profile, I can see that the majority of your links are directory links, with some article directory stuff in there like SelfGrowth. It's links like these that I imagine the algorithm would have devalued last week. Incidentally, yours is yet another site with alltop links that looks to have been devalued - I really think Google are going after these directories hard.
Now, just because you have been devalued doesn't mean you have been penalised - you wouldn't necessarily need to remove these links unless they formed unnatural anchor text, which really isn't the case with your site.
I feel for you, mate, because it's a very hard niche to do the normal inbound/content marketing for, as many people don't want that content on their sites, while others just blast high-PR paid links at their sites to rank. My suggestion is to replace these directory links with new ones of a better quality, but in your particular circumstance, this is easier said than done.
Hope this helps a little bit - quite confident this is why your rankings have dropped, but getting them back might be a lot more difficult.