A developer who tells you "W3C validation isn't important" is like a house builder telling you "Those small cracks in the walls are nothing to worry about"
George
Google has a policy for this - what you're doing is not advisable - you should be annotating the URLs. You can read the correct approach to take here: https://developers.google.com/webmasters/mobile-sites/mobile-seo/configurations/separate-urls
Hi,
You're far from being alone with the issues you described, but personally I wouldn't recommend what you're suggesting:
If I was you I'd disavow the spam links per Google's policy (https://support.google.com/webmasters/answer/2648487?hl=en), set up 301s to your new URLs and then, with a bit of patience, start your SEO afresh with a clean slate.
George
@methodicalweb
Hi,
I see a couple of assumptions in your question. I would say that having a "keyword rich domain" is becoming a less significant ranking factor in SERPs, so I wouldn't base the migration of an existing website that performs pretty well on the potential of a new domain targeting certain KWs.
The second assumption is that your existing domain is ranking purely because it's older. There are likely to be other factors at play here - particularly backlinks.
However, I realise that you need to restructure the website, and moving to a single domain with the complexes on subdirectories makes sense architecturally. You might well see a drop in rankings while you carry out this migration, so if this is a key acquisition channel, investigate PPC options to bolster your traffic in the meantime.
As for the 301 - I agree it makes sense to 301 to the complex subdirectory for a user, however in Webmaster Tools Google doesn't support the migration of one domain to the subdirectory of another domain. This means it won't be as seamless as if you migrate to the root of the new domain.
One way around this would be to redirect the old domain to the root domain, but provide very clear navigation on how to get to the relevant apartment complex to a user. As far as a user is concerned, I would see this as an acceptable solution.
George
Your site appears to be indexed OK, but your visibility is low. I checked, and "money site" is a low competition keyword that you should be ranking better for.
Taking a look at your backlink profile (opensiteexplorer.org), it appears that there are a ton of toxic links pointing to the domain. This is almost certainly going to affect your rankings through Google Penguin, unless someone's already gone through a stringent disavow process.
Before you launched a new site on this domain, was it vetted to see if your predecessors had done any link building badness?
George
Hi,
I've been badly burnt by agencies in the past offering "quality" link building services and have done quite a lot of work on dealing with a conundrum similar to yours. Here is my advice:
Good luck,
George
Personally I wouldn't rely just on robots.txt, as one accidental, public link to any of the pages (easier than you may think!) will result in Google indexing that subdomain page (it just won't be crawled). This means that the page can get "stuck" in Google's index, and to resolve it you would need to remove it using WMT (instructions here). If a lot of pages were accidentally indexed, you would need to remove the robots.txt restriction so Google can crawl them, and put noindex/nofollow tags on the pages so Google drops them from its index.
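For reference, that tag is just a standard robots meta tag in each page's <head> (a generic example, not specific to your site):
<meta name="robots" content="noindex, nofollow" />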
To cut a long story short, I would do both Steps 1 and 2 outlined by Federico if you want to sleep easy at night :).
George
Hi Tanveer,
It's hard to answer your questions without seeing the raw data. I presume these are external rather than internal links, and that they are genuinely new as opposed to only just having been discovered. I would start with going to Webmaster Tools, downloading your latest links and having a look at where they are coming from.
There could be a number of reasons for this, and so there's no point me speculating and you're right to investigate further. Using a link profile checker such as cognitiveseo.com will give you a clearer idea on the quality of any new links you acquire.
Feel free to post more information if you need,
Regards
George
You're in luck because Matt Cutts covered at least part of this question quite recently, which you can read about/watch here: http://searchenginewatch.com/article/2308339/Matt-Cutts-Create-Unique-Meta-Descriptions-for-Your-Most-Important-Pages.
In short - you should hand craft the meta descriptions for your most valuable pages (i.e. the pages you want to rank high in SERPs), but hand crafting every meta description on your site wouldn't be expected due to the amount of work involved.
Personally I think the variance between these auto generated descriptions is still too low and would look for other words to vary them by - for example the type of cruise, the savings, or activities offered on the cruise in each region.
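As a rough illustration (these are entirely made-up cruise pages, not your actual copy), varying the descriptions by details like region, savings and activities might look like:
<meta name="description" content="Caribbean cruises from Miami - 7-night itineraries with snorkelling and island excursions, plus savings of up to 40% on selected sailings." />
<meta name="description" content="Mediterranean cruises from Barcelona - visit Rome, Athens and Santorini, with family activities on board and late availability deals." />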
You'll also want to bear in mind a similar problem you're likely to experience with the page Title, Headings and content.
George
Hi Lee,
The foundation site idea sounds like a real roundabout way of achieving organic traffic and hence sales - which from a high level I'm assuming is what you're trying to achieve. It would perhaps make more sense if you were going to use the Foundation site to drive referrals, or to use for PR, rather than solely for link equity purposes.
It wouldn't take much for Google to work out that the foundation site is a bit of a cynical attempt to gain rankings.
If I was you I'd focus on improving the content and linkability of your client's existing site and address some of the branding issues head on rather than side-stepping them with a sister website. You can incorporate the "foundation" idea into the existing website (perhaps on a subdomain or directory), which if done properly - with valuable content - will earn natural links and therefore gain far more organic value than having a sister website.
George
Hi Rich,
I can't imagine that Google would penalise your parent site because branch site domains 301 to it in this way. I gather from Matt Cutts that a significantly long chain of 301s (301->301->301->301 etc) might be frowned upon but that's not what you're proposing.
I'm making an assumption that your goal is to publicise the "friendly" .com domains on business cards / advertising so that users don't have to type in a long URL. You will most likely get links, but the 301 will pass on at least some of the value to the parent site. The only thing to consider perhaps is whether you plan to have other pages on the friendly domains which also need to 301 (e.g. www.DentalCareofLacey.com/contact), in which case there will be an overhead in maintaining these.
You'll also want the parent site landing pages to be SEO optimised for their respective regional areas, but as you've already got the region in the URL I think you're probably on top of this.
George
It looks like this error is caused by a plugin you have installed and enabled on your WordPress site that probably isn't compatible with the version of WordPress you're running. If you disable the Backlinker plugin it will probably go away.
As for SEO impact - it also appears to have mangled your /robots.txt (which you should fix), and seeing this error makes for a poor user experience, so it's worth addressing.
George
Hi,
I understand the question as: in the SSL (HTTPS) version of my homepage, should I add a rel=canonical link to the markup which points to the non-SSL version of my homepage?
If your SSL pages are only accessible to authenticated users (i.e. not crawlers) then I can't see that it would make much difference, as you won't suffer from duplicate content. However, if your SSL page is accessible to crawlers (as is becoming more common) then adding a canonical tag pointing to the non-SSL version is a good idea. In addition to preventing duplicate content issues, there's a good chance that your SSL page might get linked to, and blocking crawlers from it (using noindex / robots etc.) means you won't get the benefit of those links.
One thing to bear in mind first is that you should decide on whether the single canonical version for your site is your HTTP or HTTPS pages. Then canonicalise accordingly.
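For example, if you settle on HTTP as the canonical version, the <head> of the HTTPS homepage would carry something like this (example.com stands in for your own domain):
<link rel="canonical" href="http://www.example.com/" />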
George
The only thing I would add to the existing responses is that if, following a "site:www.mysite.com" query, you notice that some key landing pages haven't been indexed, then submit them via Webmaster Tools (Fetch as Google).
I would also make sure your sitemap is up to date and submitted via WMT too. It will also tell you how many of the sitemap URLs have been indexed.
These 2 things could speed up your re-indexing. My guess is that if it's a reputable site, and the migration of URLs was done properly, you'll probably get re-indexed quickly anyway.
George
Hi Justin,
Personally I think you'll be fine as you've described the initiative working. Google doesn't expect every link to a website to be from a high authority source - in reality, there will be a mix of high and low authority pages/domains linking to every website, and anything else would look unnatural. However, if the blog posts are being spun out on blog networks, or if your customers' sites have been penalised by Google, then it probably isn't going to help you much.
What isn't clear is whether you're effectively buying these links, and whether they will pass PageRank or not. I encourage you to read Google's guidelines on this: https://support.google.com/webmasters/answer/66356?hl=en.
George
Hi,
It was a very bold move to drop such a significant number of pages from your site, especially if they were built up over time and attracted links. Even if the content wasn't completely original, that's not to say it didn't have some value. I think if I had made such a major change to a website and saw rankings drop, I would probably have reversed the change but then it's not clear whether that's an available option. Since I don't know the full reasoning behind the decision I'll reserve any further judgement and try to answer your question.
Returning 404s is the "right" thing to do as those pages don't exist any more, though putting 301s to very similar content is preferable to keep the benefit of any backlinks. I sense there weren't many links to worry about though as you're not very positive about the content which was deleted!
Google will hold onto pages which return 404s for some time before removing them from its index. This is to be expected as web pages can break/disappear unintentionally and so you have a grace period to "fix" any issues before losing your traffic.
The fact that Moz isn't showing any 404s suggests that you're no longer linking to the deleted pages, as they're not being picked up by the crawl. They will drop out of WMT in a few weeks in the cases where you haven't put 301s in place to existing pages. You should also double check that they've been removed from the sitemap you submitted to Google.
Hope that helps,
George
@methodicalweb
The fact you already know the toxic links and have link building experience suggests you're more than equipped to do it yourself. As already suggested, the best way is to use Google's Disavow tool in Webmaster Tools - and this is exactly what a link removal company would do.
I would consider a month trial/paid of Open Site Explorer / Majestic to get all the links, though Webmaster tools also provides a free sample download of links too.
I'm sure it goes without saying, but just make doubly sure that these links are toxic before you disavow them. If a link removal company slipped up here there's a risk of them causing real harm, so there are advantages to doing it yourself if you know what you're doing.
I've read conflicting studies about the use of trust badges. Sometimes they're a good idea as they instil a feeling of trust in a nervous/cautious customer, but other times they can have a negative effect by scaring off a user who hadn't considered security/privacy to be an issue until you mentioned it! It depends on the level of technical/online shopping experience your customers have.
As Gregory says - test it - get them to give you a month free trial and see if it impacts your conversion.
George
@methodicalweb
Hi Jarrett,
Although the menus probably look different in your designs (an assumption on my part), the HTML looks identical on the link you provided (ULs/LIs). If the HTML is the same, then you'll use CSS to vary the appearance of them - specifically using viewport-based media queries on responsive mobile, which are designed for exactly this scenario.
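As a rough sketch (the class names here are made up), the same markup can be restyled per viewport width like this:
<ul class="main-nav">
  <li><a href="/">Home</a></li>
  <li><a href="/products/">Products</a></li>
</ul>
<style>
  /* desktop: horizontal menu */
  .main-nav li { display: inline-block; }
  /* mobile: stacked menu below 600px wide */
  @media (max-width: 600px) {
    .main-nav li { display: block; }
  }
</style>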
Perhaps I'm missing some other dev reason why it can't be done, but using ajax for this, even if you do attempt to block Google from crawling it, sounds like an over-engineered solution.
George
Hi Finnmoto,
You're in luck - it does use a 301 for that homepage redirect. The results of this test were brought to you by the mighty Fiddler (http://www.telerik.com/fiddler).
I've migrated pages like this before and it can take a bit of time for the dust to settle. Remember you've migrated an entire website to a new subdomain in one go and that takes time for Google and other services to process (depending on how authoritative your site is).
It's worth crawling your entire site's old URL structure with ScreamingFrog to check the redirects were implemented correctly.
Regards,
George
Personally I would include the company/brand name. If it isn't a blue chip then it does no harm to re-enforce your brand on SERPs, and if it is a blue chip then you potentially stand to increase click-through because of increased trust/recognition.
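For example (a made-up page title, not one of yours):
<title>Men's Leather Brogues | Example Brand</title>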
George
Hi,
I took a quick look at your site, sitemap and index status: there are only 25 URLs in Google's index, but very many more in the sitemap.
What I couldn't work out is where the /item-details/ URLs in the sitemap are linked from on your website; I can't get to them through Buying -> Catalogue. It won't help your indexing status if they aren't being linked to from anywhere.
The biggest issue you have however is the way canonicals are set up on the problem pages. If you go to this page:
https://www.wilkinsons-auctioneers.co.uk/item-details/?ID=2710
It has the following canonical (without the id):
<link rel='canonical' href='https://www.wilkinsons-auctioneers.co.uk/item-details/' />
If you search on Google, you'll see that canonical URL is what's indexed, so if you fix this by adding the ID to the canonical, the individual item pages should start to appear in SERPs.
You have exactly the same problem on your auctions pages. e.g. https://www.wilkinsons-auctioneers.co.uk/auction-items/?id=13&pagenum=51
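To illustrate, the corrected canonical for the item page above would look something like this (keeping the ID that identifies the item), and the same pattern applies to the auction pages:
<link rel='canonical' href='https://www.wilkinsons-auctioneers.co.uk/item-details/?ID=2710' />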
Another point that will help you rank is to use friendlier / more descriptive URLs for the items.
Hope that helps
George
Hi Chiaryn,
Thanks - you've been really helpful! I had assumed that as the referrer wasn't in the Web UI (per WMT), it wasn't available anywhere. I'd also assumed it was a copywriting issue and not a product data issue.
Need to readdress my assumptions
George
Hi Sika,
What you're seeing isn't anything to be concerned about, and Moosa has already answered the cannibalisation part. I'd advise against tracking positions for individual keywords using a single tool from one week to the next. There's a natural fluctuation in rankings, and Moz is taking individual snapshots that might be up one day and down the next, so they're only really meaningful when tracked over a longer period of time. You're also not taking the long tail into account - bunches of similar keywords which may not be fluctuating nearly as much as the one you're tracking.
The real indicator of where you rank should be the organic traffic to the page. I doubt that traffic will be fluctuating even nearly as much as the positions for the few keywords you are looking at.
As for algorithmic negative impact - you would probably see significant drops across multiple tracked keywords if this was the case - and those drops would be sustained until you diagnosed and fixed the problem.
Regards,
George
Hi,
From a code management point of view - as Peter says it's very common practice to split your CSS into different files as they are then much easier to manage and maintain. You can use a tool like Yahoo's YUI compressor to minify - as Bradley says - and aggregate (merge) these files.
From a web performance point of view, fewer files does not always mean better performance. Web browsers used to download only up to 2 files per domain, but now it's pretty standard for them to support 6 or more. See a browser breakdown for Max Connections and Connections per hostname here: http://www.browserscope.org/?category=network&v=top. I wouldn't recommend deliberately splitting across 6 files, but you might find that one massive CSS file downloads quicker when split up.
There is another disadvantage to having a single CSS file in that you're not making the most of web browser caching. Every time you change any CSS, all users will have to download the entire file again. Again this may not be a problem for you, but it's something to bear in mind.
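As a sketch (the file names are made up), splitting a rarely-changing base stylesheet from one that changes often lets returning visitors keep the big file cached:
<link rel="stylesheet" href="/css/base.min.css" />      <!-- large, rarely changes: stays cached -->
<link rel="stylesheet" href="/css/seasonal.min.css" />  <!-- small, changes frequently -->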
My advice would be to point Google Pagespeed at your website's key pages and act on as much as the feedback as possible: https://developers.google.com/speed/pagespeed/. It is a fantastic resource and presents its findings very clearly.
George
@methodicalweb
I would throw HTTP 410s for them all if they don't get traffic. 410 carries a little bit more weight than 404s and we're not talking about a small number of pages here. I wouldn't redirect them to the homepage as you'll almost certainly get a ton of "soft 404s" in WMT if done all at once.
Matt Cutts on 404 vs 410: https://www.youtube.com/watch?v=xp5Nf8ANfOw
If they are getting traffic, then it'll be a harder job to unpick the pages that have value.
George
SearchMetrics would be a good place to start - you won't get individual keyword historic performance but it will show your website's overall SEO visibility over time. Particularly useful for tying in with Google algorithmic updates.
George
Hi Aaron,
The search experience on the website is a bit unconventional in that you search for a company name and it returns pages of results alphabetically listed with the name you are searching for hopefully in there somewhere!
You could make changes to the pagination using rel=next/previous, but what you're displaying isn't really "true" results pagination. I would therefore be cautious about changing it if the site is ranking well.
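If you did go down the rel=next/previous route, it's just a pair of link tags in the <head> of each paginated page; the page parameter below is hypothetical, since I don't know how your pagination URLs are built:
<link rel="prev" href="http://www.formationsdirect.com/companysearchlist.aspx?name=AMNA+CONSTRUCTION+LTD&page=1" />
<link rel="next" href="http://www.formationsdirect.com/companysearchlist.aspx?name=AMNA+CONSTRUCTION+LTD&page=3" />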
Canonicals would only be required if you were showing the same content on different URLs. A quick "site:" search like the below only returns one result, so either Google isn't showing the duplicate URLs (very likely given your question) or it isn't a problem for you:
site:www.formationsdirect.com inurl:companysearchlist.aspx?name=AMNA+CONSTRUCTION+LTD
You can look in Webmaster Tools to see which query string parameters Googlebot is picking up and configure the behaviour you want it to take. You can also get some sense of the duplication, if it is an issue.
Regarding the company page URL you gave, anything after the # in the URL won't get crawled so you don't need to worry about canonicalising those.
Again, if it's ranking well, be very careful about trying to solve a problem that doesn't exist. If you can find duplicate content then definitely redirect or canonicalise it and see what kind of impact it has. I would do this before taking on anything more significant like the website information architecture and navigation.
George
Hi Glenn,
Assuming you're on IIS7, I think these are the steps you are looking for:
http://technet.microsoft.com/en-us/library/cc732930(v=ws.10).aspx.
I strongly recommend you test the redirect configuration out on a test server first if you don't have a great deal of IIS experience.
George
@methodicalweb
Hi there,
You're describing quite a common frustration with Google Seller Ratings not appearing. The official line from Google can be found here: https://support.google.com/merchants/answer/190657?hl=en. The excerpt that is relevant to you is as follows:
Reviews are not added to Google Shopping results in real-time, so you may notice a delay between receiving a new review and its addition to your rating in the Google Shopping results. The same is true when a review is removed from a seller rating website. If you notice your reviews have stopped appearing, please make sure that the store name and registered domain match in your Google Merchant Center account and third party seller rating websites. Learn more about how to update the store name and website URL in the Google Merchant Center account settings.
For Google Shopping star ratings to appear, typically your business needs at least 30 unique seller reviews, each from the past 12 months. However, we may show ratings for merchants with fewer than 30 reviews if we have sufficient data from other sources to determine an accurate rating.
George
@methodicalweb
Hi Aaron,
First off, since your rankings haven't been affected I would definitely hold off changing anything in WMT unless you're sure, as it might cause more harm than good. If you paginate what looks like potentially thousands of pages, I'm not convinced Google will look on this fondly. The URLs will probably also change regularly as more companies are incorporated, because the pages are set to show fixed list lengths.
Resolving the duplicate content on-site is definitely the best course of action. The fact that Moz is crawling these duplicate pages indicates that it's picking up links from somewhere on your site. If you are able to stop exposing those links and link only to the "preferred version", i.e. the canonical, then this will give you some control and a better understanding of the site's information architecture.
Regarding setting up of canonicals, I suspect that this will be a harder job as of the 3 duplicate URLs you provide, it's not immediately clear which one would be the canonical. There are probably also thousands of instances similar to this duplicate group across other company lists and Google will have picked at random which one it sees as the canonical on each one. Marking another URL in the group as the canonical stands to (at least temporarily) cause a drop in rankings and SEO visibility if done across thousands of pages simultaneously.
If I was you and I felt compelled to address the issue, I would pick a sample of ~10% of the duplicate groups, set a canonical on each of them and see what happens to rankings over 3-6 weeks. I would also add the canonicals to a sitemap and try to update any links on your website so that only the canonical version is referenced.
It's risky though, as your rankings are good, even though I understand the principle of what you're trying to achieve. When I've done things like this in the past, it's been on websites that had nothing to lose.
George
Hi,
This is quite hard to diagnose without seeing the actual page content but I can give you some pointers:
1. It sounds like the textual variance between the core gallery page and the individual category pages is too low. If you want to rank for the individual category gallery pages then consider writing a different paragraph on each page (100-200 words) to vary them as well as varying the title, description, headings and anything else that you can. Google will then index all of the pages and they won't have duplicate content.
2. If you don't need to rank for the category pages, and want to keep the content the same (apart from the images), then consider using a rel=canonical from the category pages back to the core gallery page. Google will then only index the core gallery page and you don't need to worry about the content being duplicated. Moz should honour the use of rel=canonical and not report duplicate content any more.
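As an example of option 2 (the URL here is a hypothetical placeholder for your gallery structure), each category gallery page would carry a canonical back to the core gallery page in its <head>:
<link rel="canonical" href="http://www.yoursite.com/gallery/" />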
George
@methodicalweb
Hi,
I'm looking to revamp the fortunes of an ailing Fashion ECommerce blog, which once had an impact on SEO for the site which it linked to but now has fallen by the wayside.
Blog sits here: www.mydomain.com/blog and links to products and categories on the ECommerce site www.mydomain.com.
The blog has about 2000 posts on it written over the past 5 years, which are almost all rewritten content about existing stories, events or embedded YouTube videos related to fashion on the Web. None of the blog topics are unique, but the posts have been rewritten well and in an entertaining way - i.e. it's not just copy and paste.
The blog is written on an old, proprietary platform and only has basic Social sharing. You can't comment on posts, or see "most popular" posts or tag clouds etc. It is optimised for SEO though, with fashion category tags, date archives and friendly URLs.
The company badly needs a shot in the arm for its content marketing efforts - so we're looking into the creation of infographics and other types of high quality, shareable content with an outreach effort. Ideally I want this content to be hosted on the ECommerce site, but I'm faced with a few options which I'd appreciate the community's view on:
How should I handle the mix of the legacy content on /blog and the addition of new, "high quality" content?
Thanks in advance,
George
Yes, it's the worst possible scenario in that they basically get trapped in SERPs. Google won't re-crawl them until you allow crawling again, at which point you set noindex (to remove them from SERPs) and then keep noindex,nofollow on them to keep them out of SERPs and to stop Google following any links on them.
Configuring URL parameters again is just a directive regarding the crawl and doesn't affect indexing status to the best of my knowledge.
In my experience, noindex is bulletproof but nofollow / robots.txt is very often misunderstood and can lead to a lot of problems as a result. Some SEOs think they can be clever in crafting the flow of PageRank through a site. The unsurprising reality is that Google just does what it wants.
George
I agree with the 2 responses above.
Your blog is probably ranking because it has more links/shares (or at least more recent links/shares) and potentially more relevant content. You should try improving the content on the laptops products page.
You should also make it very easy for someone visiting your blog to get to your products to purchase them. If the blog is well written and useful/engaging, this might be a good opportunity for your content marketing to form a key part of your customer journey.
Hi Rafal,
The key part of that statement is "we might still find and index information about disallowed URLs...". If you read the next sentence it says: "As a result, the URL address and, potentially, other publicly available information such as anchor text in links to the site can still appear in Google search results".
If you look at moz.com/robots.txt you'll see an entry for:
Disallow: /pages/search_results*
But if you search this on Google:
site:moz.com/pages/search_results
You'll find there are 20 results in the index.
I used to agree with you, until I found out the hard way that if Google finds a link to a page, regardless of whether it's blocked in robots.txt, it can put the URL in the index, and it will remain there until you remove the crawl restriction and noindex the page, or remove it from the index using Webmaster Tools.
George
I bumped SEO-Buzz's answer
In practice, sometimes it's impossible to write unique product descriptions. I've worked with websites that have over 15K products, with around 5K changing every quarter. To keep on top of that you'd need an army of copywriters.
In that situation I would recommend doing the following:
1. Make sure your category / hub page content is awesome / unique / relevant. These will then be your main landing pages.
2. Pick key product sections - based on high margin / good stock availability / competitive pricing - and update the product descriptions for them. If your client sees improved sales as a result, they will probably roll out this strategy to the rest of the site.
Hope that helps
George
Hi Graeme,
For old product pages - your solution is good regarding showing users alternatives to the out of stock products. No need for an "out of stock page" as there's no value in that for crawlers or users. Regarding point 2 - if you redirect discontinued product pages to category pages that should be fine although Google may regard that as a soft 404. If there are loads of products like this and you 301 them in one go then the chances are it will flag up in Google WMT. If there are a small number and you introduce them gradually then you'll probably be fine.
For the crawl errors question, adding value to the pages in terms of related products is a good solution if that's viable and the pages will be different enough from each other (i.e. no duplicate content). One thing that isn't clear at the moment is if you're redirecting empty category pages all to the homepage - or if it's possible to redirect or canonical them to their parent category.
e.g. For home -> clothing -> men's clothing -> shoes
If all the men's shoes are discontinued, then redirect that page to men's clothing rather than to the homepage. This reduces your chances of getting a soft 404, and is also arguably a better user experience.
Hope that helps,
George
Yes, this is a good idea as it's a catch-all for URLs that might include tracking parameters, or other parameters that don't affect the page content. When there are no tracking parameters, it's more development and testing work to hide the canonical than to leave it in place, and having it there doesn't cause any issues. It's also quite a brutal but effective catch-all if your page was accidentally accessible via other URLs - e.g. non-www or https.
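To illustrate with a made-up URL, a page reached at http://www.example.com/shoes/?utm_source=newsletter would still declare itself as the canonical:
<link rel="canonical" href="http://www.example.com/shoes/" />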
George
No need to be concerned. Aside from all the really well documented best practices on canonicals, in your original question you've spotted at least one big site that does this - they pay their SEOs big bucks and rank well.
I've never come across any reason to be concerned about losing Page Authority by having a page canonical to itself.
Hi,
This isn't the best forum for this question as it's about IIS configuration - you'd be best off hitting up Windows Server configuration forums.
It is possible to do what you want to do. Dynamic (application) content in IIS needs to run under an application pool in order to be processed. You do this by creating an application under the website in the IIS manager.
Static content typically should sit under a different virtual directory (doesn't need an application). This means you can set it to be cached to improve page load time for users.
My advice would be to go back to the developer and look at his dev server set up, then copy it for your live server. Sorry it's hard to give you any more advice without a lot more information on the environment and code.
George
Hi,
You'll need to provide the site details if you need help in diagnosing a penalty.
As a starting point I would log into Google Webmaster Tools to see if a manual penalty has been applied, I would also look in Analytics to see if your organic traffic overall has dropped across other pages on your website.
An algorithmic penalty is harder to diagnose, but can usually be recognised by aligning traffic drops with dates of Google algorithm releases.
George