We've seen one of our sites jump from the low 40s to 11 overnight after months of being low.
We're UK based as well, and more a directory-style site than e-commerce.
Changing any URL to any other URL without a proper 301 on the old URL can have adverse effects, as old links then go to 404 pages. I wouldn't expect a gain or loss from that inconsistency in your URL structure, though, save for people linking to .html by mistake.
Also, as a slightly less SEO-related note, for the best UX try getting rid of the file extensions entirely, and in the homepage's case get rid of the index too. This mostly helps with the homepage, reducing the odds of needing a 301 for a link to reach you correctly.
Looking at the footer of your website, I see "Powered by Communications3000 C3MS".
From a quick look at the site that links to, I'm guessing it's something your web developers have put together rather than an 'off the shelf' CMS, so to speak.
I would recommend you find an SEO who knows both SEO and some .NET and who could communicate with you and your developers to see whether it is indeed impossible to do a 301, or whether your developers are making excuses, misunderstanding the request, etc.
That would probably give a better understanding of the situation.
As everyone else has said, it does sound rather odd that a 301 can't be implemented between the various ways of doing it.
.NET is not a CMS; it is a language/framework your CMS is written in. (Just as you can have the same menu in English and also in French, you could in theory have the same CMS in PHP and in .NET.)
You'd need to give the name of the CMS if you want advice on its suitability.
Honestly, your developer should be able to 301 from URL A to URL B without loops based on the info provided so far.
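To give a feel for what's involved, here's a minimal sketch of the kind of rule that could do it at the server level rather than inside the CMS, assuming the site runs on IIS with the URL Rewrite module installed (the rule name and pattern are placeholders, and it would sit in web.config under system.webServer). Because the redirect target no longer ends in .html, the rule can never match its own output, so it can't loop:

    <rewrite>
      <rules>
        <rule name="Drop .html extension" stopProcessing="true">
          <match url="^(.*)\.html$" />
          <action type="Redirect" url="/{R:1}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>

This is only a sketch under those assumptions; the same idea can be done in Apache or in application code, which is exactly the conversation an SEO with some .NET knowledge could have with your developers.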
We don't have a view-all page (we found them so slow, so long, and with so many links that we saw a notable improvement in rankings in general when switching to the quicker paginated versions). And other than the first page, none of the other pages are currently in our sitemap.
I'm not entirely sure how that would stop GWT flagging them as duplicate metas, though, unless you mean to also noindex them.
From a navigation point of view, being able to erase the end of a URL and end up at a parent page is excellent for UX, as is not having to recall a file type (the .htm).
It wouldn't entirely surprise me, then, if Google favoured such a structure.
I expect, however, that Google infers such relationships more from your on-site interlinking. Breadcrumbs, for example, would probably have more effect (I do believe there is a markup for them in Webmaster Tools, or at least one was being beta'd recently).
I personally wouldn't make such a change unless there were other issues being fixed at the same time (improving UX would count). Make sure, of course, to do your 301s and change your internal links if you do.
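On the breadcrumb point, as a rough sketch only (this may not be the exact markup referred to above, and the URLs and names are placeholders), schema.org's BreadcrumbList vocabulary in microdata looks like:

    <ol itemscope itemtype="https://schema.org/BreadcrumbList">
      <li itemprop="itemListElement" itemscope itemtype="https://schema.org/ListItem">
        <a itemprop="item" href="https://www.example.com/"><span itemprop="name">Home</span></a>
        <meta itemprop="position" content="1" />
      </li>
      <li itemprop="itemListElement" itemscope itemtype="https://schema.org/ListItem">
        <a itemprop="item" href="https://www.example.com/category/"><span itemprop="name">Category</span></a>
        <meta itemprop="position" content="2" />
      </li>
    </ol>

The markup just mirrors the breadcrumb trail you already show visitors, so it reinforces the same parent/child relationships your interlinking implies.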
That scale of unique descriptions is well beyond our capacity. We're actually considering dropping the number of items per page too.
Thanks for the help.
Could 'ignore' cause any problems (such as pages that should/shouldn't be indexed)? I was rather surprised to discover that using canonical wasn't enough.
I'm currently working on a site whose URL structure is something like www.domain.com/catagory?page=4, with ~15 results per page.
The pages all canonical to www.domain.com/catagory, with rel=next and rel=prev pointing to www.domain.com/catagory?page=5 and www.domain.com/catagory?page=3.
Webmaster Tools flags these all as duplicate meta descriptions, so I wondered if there is value in appending the page number to the end of the description (as we have with the title for the same reason), or if I am using a sub-optimal URL structure.
Any advice?
In that case, I've seen a few people try it with no notable difference. Pre-Penguin there were a few cases here where removing several instances of a keyword from the body seemed to dramatically improve rankings, but that's more removing keyword stuffing than optimising your page to appear unoptimised.
Right now, if your keyword can be there and it reads naturally, then I don't see much reason for it not to be there. In contrast, if your whole page is about blue widgets and the heading doesn't include blue widgets, you'll be confusing people. People also link using the heading/title occasionally, so you should pick up a few genuinely natural links with that heading.
At least as far as Penguin goes, it seems much more link-anchor oriented right now.
I'd implement rel=prev and rel=next on the pages to signal that they're paginated, with the first page mentioned being the first in the chain.
rel=canonical should then point to the actual URL, not the view-all page.
I think that is the 'correct' implementation for paginated content since rel=prev and rel=next were introduced.
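As a minimal sketch with placeholder URLs, the head of a middle page in such a series (say page 4 of a category) would carry something like:

    <link rel="canonical" href="https://www.example.com/category?page=4" />
    <link rel="prev" href="https://www.example.com/category?page=3" />
    <link rel="next" href="https://www.example.com/category?page=5" />

The first page in the chain would simply omit the rel=prev tag and the last page would omit rel=next.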
If any of those sub-pages had links, are ranking, etc., then you're definitely going to have to look into 301s at the very least.
For anything more than that, I think it would be best to give a more specific example or link.
For a simple, quick way, I would use a bulleted, indented list. It works at a glance and is less demanding than a diagram.
Something like the sketch below (taking your list as an example and moving all the level 2s up a level; I'm sure you don't want www.url.com/Home/About-us).
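As a purely hypothetical illustration (all page names other than About-us are made up), the indented list could look like:

    www.url.com/
        www.url.com/About-us
        www.url.com/Contact
        www.url.com/Products
            www.url.com/Products/Widgets

Each level of indentation is a level of the URL/site hierarchy, which is usually enough to spot pages sitting at the wrong depth.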
Google is definitely OK with this. Bing apparently might have issues, but the only way around that would be implementing it on all the duplicate pages but not the original (which is non-trivial, or impossible, to detect reliably, and is why Google allows it).
Given the nature of the objection (Bing claims you're telling it that the page is a duplicate of itself; see the article John linked), I would actually expect Bing to change that to something more sensible in the future, if it's true.
Overall, I would implement it on every page, just to deal with all those links to it with random tracking parameters etc. that people could throw on.
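For example (placeholder URL), a page reached at https://www.example.com/widgets/?utm_source=newsletter could carry a self-referencing canonical pointing at the clean URL:

    <link rel="canonical" href="https://www.example.com/widgets/" />

That way any tracking-parameter variants that get linked or crawled consolidate back to the one clean version of the page.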
Since most people leave them off, most people link that way, and it's easier to set up, I have site-wide slash removal.
You get a 301 regardless if people link to you the wrong way, so be careful when giving out the URL, and in your internal linking, that you use the right one either way.
I'm also not convinced that the article quoted is right in saying that the server does a 301 when you leave the slash off. It certainly has to do an extra lookup stage to find the right file (look for the file, fail to find it, then look for a directory with a default document), but there's no 301 header returned.
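For reference, the exact mechanics of a site-wide slash removal depend on your server; as one common pattern, assuming IIS with the URL Rewrite module (the rule name is arbitrary), it might look like:

    <rule name="Remove trailing slash" stopProcessing="true">
      <match url="(.+)/$" />
      <conditions>
        <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
      </conditions>
      <action type="Redirect" url="/{R:1}" redirectType="Permanent" />
    </rule>

The conditions leave real files and directories alone, so only the slashed duplicates of your pages get the 301.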
Last time I came across such an issue, I mostly started by making the 'easy' changes that reduced the number the most.
In the last case, that was implementing a 301 to the www version of the site (cutting the errors in half) and putting a canonical on one search page.
That got the number down to the point where it was easier to decide whether it was worth making friendlier URLs, and to discover the more interesting places duplicate content was being generated.
It's one of those things where I would always aim for zero where I can. It usually means either that the URL or site structure can be improved significantly, or that it's such an easy fix it's hard to justify not doing.
There are many crawl tools out there, but Xenu is the one I would recommend for this. It will crawl your site and then, at the end, ask for your FTP details so it can specifically check for orphaned pages. That should produce the report you need.
*edit - here's the link: http://home.snafu.de/tilman/xenulink.html
First, an aside: duplicate content is often a rather large problem, so I would try using some URL rewrites or meta robots tags to see if you can improve on that.
Other than that, the Penguin update and the other updates around it were largely aimed at inbound links. It's likely a lot of your links are now of less value, or your link profile now looks 'less natural'.
I would work out where you have lost the traffic (keyword rankings and what they were bringing in) and then see if you've picked up any per-page penalties etc.
If you got penalised for a major keyword for example, you may be able to restore your previous traffic levels.
If it's across-the-board ranking drops, I would work on improving your link profile in general.
If it's a traffic drop with no ranking drops, you're probably seeing seasonal variation, or something has affected your click-through rates (is there a new, effective competitor with a better AdWords ad or organic meta description?).
I think that's most of the bases covered. Anything more precise than that would require some data analysis.
Oh, on too many links: I wouldn't worry about it at all unless that page has vanished from the SERPs. 106 links isn't a huge amount on a blog.
I'd rel=canonical them if you can, as there's still nothing stopping links to them getting them indexed. It might stop Roger/Google from crawling them, but the potential indexation issues won't go away. Otherwise, perhaps noindex them.
I'd usually go as far as doing rel=prev and rel=next on the paginated searches as well.
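If you do go the noindex route, a minimal sketch of the tag (placed in the head of each search or paginated page you want kept out of the index) would be:

    <meta name="robots" content="noindex, follow" />

The 'follow' part means crawlers can still pass through the links on the page even though the page itself stays out of the index.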