We've seen one of our sites jump from the low 40s to 11 overnight after months of being low.
We're UK based as well, more a directory-style site than e-commerce.
Changing any URL to any other URL without a proper 301 on the old URL can have adverse effects, as old links then go to 404 pages. I wouldn't expect there to be a gain or loss from that inconsistency in your URL structure, though, save for people linking to .html by mistake.
Also, as a slightly less SEO-related note, for best UX try getting rid of the file extensions entirely, and in the homepage's case the index as well. This mostly helps with the homepage, reducing the odds of needing a 301 for a link to reach you correctly.
Looking at the footer of your website, I see "Powered by Communications3000 C3MS"
From a quick look at the site it links to, I'm guessing it's something your web developers have put together rather than an 'off the shelf' CMS, so to speak.
I would recommend you find an SEO who knows both SEO and some .NET and who could talk with you and your developers to see if it is indeed impossible to do a 301, or if your developers are making excuses, misunderstanding the request, etc.
That would probably give a better understanding of the situation.
As everyone else has said, it /does/ sound rather odd that a 301 can't be implemented, given the various ways of doing it.
.NET is not a CMS; it is the language your CMS is written in. (Just as you can have the same menu in English and also in French, you could in theory have the same CMS in PHP and in .NET.)
You'd need to give the name of the CMS if you want advice on its suitability.
Honestly, your developer should be able to 301 from URL A to URL B without loops, based on the info provided so far.
We don't have a view-all page (we found them so slow, so long, and with so many links that we saw a notable improvement in rankings overall when switching to the quicker paginated versions). And other than the first page, none of the other pages are currently in our sitemap.
I'm not entirely sure how that would stop GWT flagging them as duplicate metas, though, unless you mean to also noindex them.
From a navigation point of view, being able to erase the end of a URL and end up at a parent page is excellent for UX, as is not having to recall a file type (the .htm).
It thus wouldn't entirely surprise me if Google favoured such a structure.
I expect, however, that Google infers such relationships more from your on-site interlinking. Breadcrumbs, for example, would probably have more effect. (I do believe there is a markup for them in Webmaster Tools, or at least one was being beta'd recently.)
I personally wouldn't make such a change unless there were other issues being fixed at the same time (improving UX would count). Make sure, of course, to do your 301s and change the internal links if you do.
That scale of unique descriptions is well beyond our capacity. We're actually considering dropping the number of items per page too.
Thanks for the help.
Could ignoring it cause any problems (such as pages that should/shouldn't be indexed)? I was rather surprised to discover that using canonical wasn't enough.
I'm currently working on a site where the URL structure is something like www.domain.com/category?page=4, with ~15 results per page.
The pages all canonical to www.domain.com/category, with rel=next and rel=prev pointing to www.domain.com/category?page=5 and www.domain.com/category?page=3.
Webmaster Tools flags these all as duplicate meta descriptions, so I wondered if there is value in appending the page number to the end of the description (as we have with the title, for the same reason), or if I am using a sub-optimal URL structure.
Any advice?
In that case, I've seen a few people try it with no notable difference. Pre-Penguin there were a few cases here where removing several instances of a keyword in the body seemed to dramatically improve rankings, but that's more removing keyword stuffing than optimising your page to appear unoptimised.
Right now, if your keyword can be there and it reads naturally, then I don't see much reason for it not to be there. In contrast, if your whole page is about blue widgets and the heading /doesn't/ include blue widgets, you'll be confusing people. People also link using the heading/title occasionally, so you should pick up a few genuinely natural links with that heading.
At least as far as Penguin goes, it seems much more link-anchor oriented right now.
Considering that having an h1 as the page's main heading and using h2-h6 for subheadings is proper HTML (or multiple h1s and sections in HTML5), I'd never stop doing it in the hope of an SEO advantage that may or may not be lost with algo updates.
Most sites at the very least have an h1 as their main heading; there's nothing over-optimised about it unless you then keyword-stuff it or something like that.
Basically, using an h1 for your main heading isn't an SEO tactic; it's what it's actually for.
For the most part, it borders dangerously close to cloaking, and also runs the risk of IPs being very fallible for determining location.
On the other hand, some very minor text changes, with a default, would probably be fine if they improve the user experience. Remember that Google will only see the default version, so you're not getting any potential location-based ranking.
If it's on a larger scale, I wouldn't bother and would work on location-specific landing pages instead.
I would perform the search yourself in a Chrome incognito window and just look at the SERPs.
Is there, for example, a pack of 6 results taking up that first slot, essentially making you position 7+?
Are all the other results dissimilar to you? In that case you might not be what people are searching for.
If you're getting impressions, are in the right slot, and there's nothing else odd, compare your meta description to the number 1 result. People might just be looking at you and thinking you are providing something else.
Many directories are still providing link juice, and it tends to be the least spammy ones.
I would say the best rule of thumb is that any directory you would sign up to for the potential value of its traffic is going to be relatively safe to go for, as it is providing real value and is thus less likely to be crushed by Google.
Directories for links sake is more risky.
I would personally allow the categories to be indexed, but make sure each section has a block of text as an introduction so it's not very thin or near-duplicate content.
Then there's making sure no unwanted URLs or pages get indexed. The search pages, user pages, etc. might take some looking into to make sure you're not generating 1000s of duplicate pages.
And it's just a quirk, but the URLs read weirdly with forum.php in them. Rewriting them to just /forum/ might make them a little more usable. Not entirely sure you will get an SEO benefit out of it, though.
If any of those sub-pages have links, are ranking, etc., then you're definitely going to have to look into 301s at the very least.
For anything more than that, I think it would be best to give a more specific example or link.
Normally I hear without, but I have not seen any conclusive studies. I normally see the non-hyphenated version outranking the same phrase with hyphens, but that's anecdotal evidence.
This is where I would go with best practices for usability/branding, where at the very least short URLs tend to work better.
Having an exact match via hyphens can also look spammy at times, though that might just be because of the sheer number of spammy sites that do it.
If you're /just/ targeting the UK, all the advice I have seen, along with the general expectation, points to a .co.uk.
Personally, I would avoid hyphens in the URL just due to the sheer number of people who struggle to work out which is the right 'line thing'.
There's the manual request in Webmaster Tools.
Though the fact that they're indexed, and not the 301 targets, seems odd. Make sure there is a crawlable link to each page somewhere in your site, perhaps even make sure they are still in your sitemap, and that they are not blocked by robots.txt. That should allow Google to re-crawl the pages and realise they have been 301'd.
Also check how the 301 is implemented. Make sure there's not some kind of masking that is redirecting users and not Google, and make sure it is a 301 and not a 302.
IIS has no problem doing 301s, and if you can use PHP, ASP or anything similar, you can just manually put a 301 on each page if that fails.
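If it does come to that, a minimal per-page 301 in PHP might look something like the sketch below (the destination URL is a hypothetical placeholder):

    <?php
    // Hypothetical sketch: send a permanent redirect before any other output.
    header("HTTP/1.1 301 Moved Permanently");
    header("Location: http://www.example.com/new-page"); // replace with the real destination
    exit;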
No rel-canonical solution will result in all 3 sites ranking as far as I am aware.
Your best option is usually one site with geo-targeted pages. If it has to be 3 sites, then the only real option is to make all the content unique, on unique IPs, etc., which at the end of the day is 3x the work or more.
If you are going to do it, I would only do it with the link text as your URL or perhaps your brand name. If it creates a massive exact-match link profile, you risk being flagged as manipulative.
If you have lots of duplicate content, chances are your URL structure is causing it, in which case the problem will return with every blog post, so deleting is probably not the answer.
Check the obvious things, like whether it's caused by having a www and non-www version or by pages being accessible via different capitalisation, and fix any related issues.
The other common things to check are tag and category pages, which can easily end up looking exactly the same. Fix what you can before deleting the content, or else you risk masking the problem.
If it's been more than a week, then I would contact the help team at help@seomoz.org.
Usually it updates once a week, but one time I did have a (quickly resolved) problem where the crawl didn't happen, so it's not unheard of.
In the meantime, if your site is relatively small and you have another slot, try setting up a new campaign for the site, which will trigger a starter crawl. That will allow you to get some data quickly.
If you have moved the content elsewhere, a 301 Permanent Redirect should be used, as it passes all (well, most) of the old page's link value, etc. to the new page.
If it's temporarily elsewhere, use a 302.
If it's gone for good, not coming back, and there's nowhere appropriate to send people, consider 410 Gone.
If you mean what status code your missing pages/bad URLs should return, that should be 404, not 3XX. Redirecting the user to an 'error' page with a 301 that then returns 200 OK is a great way to have a lot of problems and little way of spotting them.
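As a rough sketch (assuming PHP and a hypothetical error template), serving the error on the requested URL with the right status code would look something like:

    <?php
    // Hypothetical sketch: return the error on the requested URL instead of
    // redirecting to an error page that answers 200 OK.
    header("HTTP/1.1 404 Not Found");   // or "HTTP/1.1 410 Gone" if it's gone for good
    include "error-page.html";          // hypothetical friendly error template
    exit;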
Is 95 hits a typo? Because if not, your traffic has dropped closer to 84%, and I would say it's Penguin with near certainty, and would advise reading some of the recovery posts floating around.
Either way, I would identify which keywords you have lost the most traffic on and review your link profile. Right now, a large percentage of exact-match anchors seems to be a strong signal that Penguin is in play.
I'd rel-canonical them if you can, as there's still nothing stopping links to them getting indexed. Blocking them in robots.txt might stop Roger/Google from crawling them, but the potential indexation issues won't go away. Otherwise, perhaps noindex them.
I'd usually go as far as doing rel-prev and rel-next for the paginated searches as well.
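For example (just a sketch, using a hypothetical /search URL and page 2 of the results), the head of a paginated page would carry something like:

    <link rel="canonical" href="http://www.example.com/search">
    <link rel="prev" href="http://www.example.com/search?page=1">
    <link rel="next" href="http://www.example.com/search?page=3">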
Last I checked, Roger obeys robots.txt and meta robots commands, as he tries to simulate what Google would crawl. If he can crawl those pages, Google probably is as well.
I think if you rel-canonical them properly, or noindex them properly, etc., it should show as a normal page.
First, an aside: duplicate content is often a rather large problem, so I would try using some URL rewrites or meta robots to see if you can improve on that.
Other than that, the Penguin update and the other updates around it were largely aimed at inbound links. It's likely a lot of your links are now worth less, or your link profile now seems 'less natural'.
I would work out where you have lost the traffic (keyword rankings and what they were bringing in) and then see if you've picked up any per-page penalties, etc.
If you were penalised for a major keyword, for example, you may be able to restore your previous traffic levels.
If it's across-the-board ranking drops, I would work on improving your link profile in general.
If it's a traffic drop with no ranking drops, you're probably seeing a seasonal variation, or something has affected your click-through rates (is there a new, effective competitor with a better AdWords ad or organic meta description?).
I think that's most of the bases covered. Anything more precise than that would require some data analysis.
Oh, on 'too many links', I wouldn't worry about it at all unless that page has vanished from the SERPs. 106 links isn't a huge amount on a blog.
There are many crawl tools out there, but Xenu is the one I would recommend for this. It will crawl your site, and then at the end ask for your FTP details so it can specifically check for orphaned pages. That should produce the report you need.
*edit - here's the link: http://home.snafu.de/tilman/xenulink.html
I'd definitely avoid having unrelated keywords on a page, but if it's variations, I would consider how much traffic each one gets, and either focus on one or make an extra page (presuming I have good, non-duplicate info for said page).
We've actually had a lot of success recently by just targeting the most prominent keyword in the titles/h1s and having variations in h2s and body text, with a few quality links to each (most of the links are for the brand name to the homepage). It looks a bit more natural that way.
Just one thing to check is 'when' your rankings dropped. If it was all in the same week/day, you might be affected by the new Panda/Penguin etc. updates, in which case there's been a lot of actionable advice on the blog.
At a glance, you appear to be targeting 3 keywords per page; perhaps focusing on one per page would help?
On the design, I can say that it might (Google apparently checks above the fold to see if you have a lot of ads there). Overall it's unlikely to make a difference if the content and structure stay the same, so if you can improve your design and get more conversions, I would always do so.
If you have nofollow on all the pages, there is a chance it is being caused by Google not being able to follow any links to your pages to crawl them and pick up the noindex tag.
Try changing them to noindex, follow.
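For instance (just a sketch), in the head of each page you want out of the index:

    <meta name="robots" content="noindex, follow">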
Here's some 2010 data showing Google's /overall/ share going up: http://www.adept-seo.co.uk/uk-search-engine-usage-statistics-2010.html
The last stat I heard from an SEO was that 90%+ use .co.uk, but I've never seen their source.
As a way of finding out the comparison, you could identify what causes people not to be redirected correctly, and then work out what percentage of users that is (as that should be most of everyone, unless you're dealing with a tech crowd).
That would also answer whether or not it is possible to track the data.
On boosting your rankings: if they're on .com and not being shown the UK pages, then they're being treated as international, in which case anything that would boost traffic from the US, for example, would probably work on them as well, namely global relevance signals.
Of course, if you're UK-only, this might be somewhat difficult to do.
In theory, Googlebot can (or will) understand nested h1s in correct HTML5. So in the example below:

    <h1>Page title</h1>
    <section>
      <h1>Section heading</h1>
    </section>

the latter h1 will be weighted like an h2. This is part of the HTML5 spec on how it's meant to be interpreted, if I recall correctly.
Whether that applies /now/, I'm not entirely sure.
Just a tangential note: Google tries to redirect you to .co.uk these days, and that data is also /very/ out of date; Google's share has been going up constantly in the UK.
On to your (2nd) question: you stand to lose more from losing your UK listing than you stand to gain in relevance elsewhere. People from the UK who are searching on .com still have their location data unless they depersonalise, so I would suspect you'd only see an improvement with those who go out of their way to depersonalise their results AND use google.com. As such, I would recommend leaving it targeting UK users.
I'm not entirely sure of a reliable way to track people from the UK who use .com. I would suspect a large portion of those are due to Google incorrectly resolving the IP, etc. to overseas, in which case you would get the same data.
Both sound risky. Either way you could end up with a lot of low-quality links. Being as large as you are, I would hire a firm to work on content you can market organically and preserve your natural-looking profile, given all the unnatural-links warnings going around.
They have definitely updated (check the SEOmoz blog); however, they have only updated the data to the end of February, so you might not see much difference.
I would expect their influence to drop; as they're abused more often to get poor-quality sites into the rankings, you would hope Google would decrease the weighting.
I also wouldn't be surprised if they didn't change at all, as dropping them would affect the ease of correctly ranking brands for their own name.
I would look into the reactions of consumers who realise you own all the websites.
If it looks unnatural, you might have trust issues and it could affect conversion. If, for example, I didn't see a major brand trying to sell me a certain product in the SERPs, I would be wondering if there's something funny going on with them.
It might be a non-issue, but that's the only other 'trap' that comes to mind.
Quite the reverse; I would say not to do so. There is certainly no benefit to you, and in the case where the algorithm decided your nofollow link was the more relevant one, you would lose all the potential link power for that page to the other page.
Nofollow is supposed to be used only for links to sites you do not wish to be associated with, and for paid links. Using it for internal navigation isn't something that has been common since PageRank sculpting stopped working.
I think first you need to work out which numbers are actually important to you, and research what is realistic.
PA, DA, PR, traffic, bounce rate: all are rather poor measures of anything other than 'bigger arbitrary numbers' without context. They're things you use as part of working out your progress towards the important numbers, not usually goals in themselves.
If you're selling something, measure sales. See what else the SEO company suggests you can measure, and see if those goals sound realistic and whether they are important to you.
For us, the type of traffic makes a world of difference; the number alone means nothing. We were getting more and more traffic from people seeking very specific items, but no traffic for the general terms. Goal-based metrics help differentiate between 'traffic' and 'people doing X, which brings Y value'.
I would manually visit all the pages that apparently have 4XX errors, and use a tool to extract the response headers.
You may have a similar problem to one I had. Due to the way a legacy site I worked on was built, there were places where the response header was set manually. I actually ended up with 404 pages returning 200 and vice versa.
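If you have PHP handy, a quick sketch for checking what a handful of URLs actually return (the URLs below are hypothetical placeholders) could be:

    <?php
    // Hypothetical sketch: print the status line each URL actually returns.
    $urls = array("http://www.example.com/some-page", "http://www.example.com/missing-page");
    foreach ($urls as $url) {
        $headers = get_headers($url);   // $headers[0] is the status line, e.g. "HTTP/1.1 404 Not Found"
        echo $url . " => " . $headers[0] . "\n";
    }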
I think the crawl test tool might be able to help you; it outputs a CSV with some of the info: http://pro.seomoz.org/tools/crawl-test
One other way of looking at this, especially if you have a short domain, is that a shorter URL uses up less of the character limit on social sites, forum sigs, or any other scenario where you might otherwise have to use a URL shortener to post the link.
It's a slight benefit, but it may mean the difference between sharing yourname.com or goo.gl/code, the former of which is usually preferable for brand recognition at least.
If you haven't tried it, there are options above the results; change the "to" field from 'this page' to 'pages on this root domain'.
That should give all the links to your root domain.
If I remember rightly, there /was/ a good reason one way or the other for using cookieless domains and such to optimise image delivery, etc. It can only be done with your website on one and the images on the other, but I can't remember at the moment which way around it was, or what scenario brings it about.
I prefer the www version mostly because all our competitors use it, so we'd look 'odd' next to them otherwise. People expect to see the www.
Since most people leave trailing slashes off, most people link that way, and it's easier to set up, I have site-wide slash removal.
You get a 301 either way if people link to you the wrong way, so be careful that you use the right one when giving out the URL and in your internal linking.
I'm also not convinced that the quoted article is right in saying that the server does a 301 when you leave it off. It certainly has to do an extra lookup stage to find the right file (look for the file, not find it, then look for a directory with a default document), but there's no 301 header returned.
I would investigate whether there is any way to rewrite the whole lot with one rule, or to auto-generate a 301 list. If you're very sure your incoming link data is fresh and that's not possible, then I would only 301 the pages with links.
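If a single rewrite rule isn't possible, a rough sketch of the auto-generated 301 list approach (in PHP, with hypothetical paths) could be as simple as:

    <?php
    // Hypothetical sketch: a generated map of old paths to new paths, checked on every request.
    $redirects = array(
        "/old-page-1.html" => "/new-page-1",
        "/old-page-2.html" => "/new-page-2",
    );
    $path = parse_url($_SERVER["REQUEST_URI"], PHP_URL_PATH);
    if (isset($redirects[$path])) {
        header("HTTP/1.1 301 Moved Permanently");
        header("Location: http://www.example.com" . $redirects[$path]);
        exit;
    }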
Don't nofollow your navigation links; there is no benefit anymore to trying to sculpt PageRank in that manner. The potential link juice is 'spent' on the link regardless of whether it is followed or not; it's just that nofollowed pages don't receive it. Save nofollow for when you're linking to sites you would not want to be associated with.
If you want pages out of the index, use meta noindex as opposed to robots.txt.
I would index your login page so people who are googling for it can find it.
On member profiles, it would depend on whether they are reasonably valuable content or not. If they're 99% duplicated content, then I would consider noindexing them. If they have bios and such, then it's probably fine to leave them indexed.