Best posts made by MickEdwards
-
RE: I have multiple URLs that redirect to the same website. Is this an issue?
I would go further than that and check the link profiles of all the domains. If there is any sign of spam, unnatural anchor text etc. then do not redirect, as you'll inject that problem into your site. Even if you believe them to be dormant and never used, I would still check in case you were not the original owner. Always worth doing that check.
-
RE: Best tools for an initial website health check?
ScreamingFrog gives all the data you want. Tools built for creating a sleek report usually don't give the full picture. It's the issues you draw out yourself that make the difference.
-
RE: Massive SERP crash
It's highly unlikely that a technical issue would drop rankings within a day or so, though it might drop URLs from the index. Check .htaccess, robots.txt, meta robots etc.
Firstly I would double check what is indexed and what header responses you are getting. Depending on the size of the site I would use http://intavant.com/tools/google-indexed-pages-extractor/. Compare with what you believe to be indexed and run the list through Screaming Frog to check the header responses for each URL.
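For a quick script-based version of that header check, here's a minimal sketch in Python - assuming the requests library is installed, and with placeholder URLs you'd swap for your own list:
import requests
urls = [
    "http://www.example.com/",
    "http://www.example.com/old-page",
]
for url in urls:
    # HEAD keeps it light; allow_redirects=False surfaces 301/302s directly
    response = requests.head(url, allow_redirects=False)
    print(url, response.status_code, response.headers.get("Location", ""))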
Otherwise I would look at exactly which keywords have been hit, check GWT for any messages, and do a thorough investigation of on-site content (Panda) and your link profile (Penguin).
-
RE: Why such a high page rank with so low metrics in OSE
I mainly go with the information in [this post](http://www.seomoz.org/blog/internal-linking-strategies-for-2012-and-beyond), particularly the section "Footer Links Are Not (Inherently) Bad". Footers are great for navigation but get hit when exploited for keyword-rich links. It seems most web designers follow that tactic though.
-
RE: Google smacked my site and dropped all rankings, can't find out why
Without delving too deeply it may be your link profile, especially if you have bombed in the past few days.
You have a lot of forum links. There are also directories and article submission sites. One site in particular, A Seek, is a database of suppliers for products searched for. If a product is not available there is no 404 page, just the same homepage view, so effectively you are part of what may look to Google like mass page 'advertising', as your link appears wholesale alongside all the other suppliers'. These links make up the backbone of your profile, and even if they are not the cause I would be very nervous leaving them as they are.
-
RE: Rel=canonical
rel=canonical needs to go either within the <head> tags or in the HTTP header.
http://googlewebmastercentral.blogspot.co.uk/2013/04/5-common-mistakes-with-relcanonical.html
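If you want to verify where (or whether) a canonical is declared, here's a rough sketch in Python, assuming requests and beautifulsoup4 are installed (the URL is a placeholder):
import requests
from bs4 import BeautifulSoup
r = requests.get("http://www.example.com/some-page")
# Option 1: rel=canonical sent as an HTTP Link header
print("HTTP header:", r.headers.get("Link"))
# Option 2: <link rel="canonical" href="..."> within the <head> tags
soup = BeautifulSoup(r.text, "html.parser")
tag = soup.find("link", rel="canonical")
print("HTML head:", tag.get("href") if tag else "none found")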
-
RE: Using Google Analytics network service provider
Initially it depends who the visitor is. This data doesn't tell you whether it's a worker or just someone browsing from that IP, and the visit to your site may be pretty random.
If you analysed the data and found some strong trends it might be worth investigating further, but it might be a little bit 'big brother'. On the other hand you might want to contact those companies on a different footing - not mentioning the stats, but knowing there is some kind of interest there (unless it's a competitor!).
-
RE: Google smacked my site and dropped all rankings, can't find out why
Yep, didn't even get to anchor text. Absolutely right - this has been spanked for all the reasons you can get spanked.
-
RE: Rel=canonical
http://www.domain.com is the same as www.domain.com. HTTP is the protocol over which web pages are sent and would be part of any "complete" URL.
So to answer your question, if there are no redirects in place you can choose either the non-www or www version for your canonical tags. However, if you are looking to consolidate to a particular version I would look at updating your .htaccess file to create a redirect rule from one version to the other.
#Force non-www to www:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^example\.com [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]
or
#Force www to non-www:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.example\.com [NC]
RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]
-
RE: Direct traffic decline
- Has the level of (not provided) increased in rough proportion to the drop in branded traffic?
- Have you double checked you are ranking #1 (or same position) for all your branded keywords?
- Are alternative Adwords ads sitting above your results (apparently 40% of people don't know they are ads - which should now change with the distinct yellow 'Ad' icon)?
- Are you still strongly competing in your market?
- Is it just brand traffic - have you analysed other keywords?
- Is your brand name looking like 'keywords' used in anchor text? There's a small chance Google might see it that way.
-
RE: Should I set up no index no follow on low quality pages?
As Ryan suggests, you still want to FOLLOW rather than give the bots a dead end - I notice your heading suggests nofollow.
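To double-check how a page is actually set, here's a quick sketch in Python that reads the meta robots tag - assuming requests and beautifulsoup4, with a placeholder URL:
import requests
from bs4 import BeautifulSoup
html = requests.get("http://www.example.com/low-quality-page").text
soup = BeautifulSoup(html, "html.parser")
meta = soup.find("meta", attrs={"name": "robots"})
# You want "noindex, follow" here rather than "noindex, nofollow"
print(meta.get("content") if meta else "no meta robots tag (defaults to index, follow)")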
-
RE: How do you compare results using different tools?
Sean gives some good points.
What I would also suggest is to look beyond the fancy keyword sales stuff: what are they going to do with you under the hood, what's currently wrong with your on-page elements, how are they going to fix those, and what impact are those changes likely to have on organic rankings?
Having a list of keywords and saying 'here are some keywords, here are the monthly search volumes, and if you get to the top 3 for x, y and z you will be getting this amount of traffic' is not enough. There is so much more: what strategy do they have for content development, and more importantly how do they approach link development - what is their overall philosophy in these areas? Can they show you any evidence of overall growth of the sites they work on?
It's easy to be seduced by keywords, but far more difficult to start over or come back from an algorithmic or manual penalty.
-
RE: May last year my site's organic listings, and therefore visitors, plummeted. Why?
It might tie in with either the Penguin or Panda updates - broadly, content issues are Panda and link issues are Penguin.
- Did you get a manual penalty notice in Webmaster Tools?
- Have you examined your link profile closely for obvious spammy, dubious, unrelated links?
- Are your inbound links heavy with keyword anchor text?
- Is your textual content unique, and does it give value to your visitor?
- Have you changed your meta content?
- Is the drop related to certain keywords?
- How have rankings stacked up over the past two years - have they fallen?
You need to examine closely so that all your work doesn't fall flat again.
-
RE: Google adwords destination link issue
The destination URL already includes the http://, so I assume http/:www.abcd.com is a typo on here.
It should make no difference whether you use the trailing slash or not and the data will be the same. https://www.en.adwords-community.com/t5/Advanced-Features/trailing-slash/td-p/434036
-
RE: Referral Traffic Issue
That's the thing. These sites are just random, out-of-context sites with no ads. I've crawled them and they are just middle-of-the-road blog-type sites (no comments) with few outbound links. Really weird.
Where I can I've blocked them in .htaccess and hope that, if it is 'a problem', they get fed up before I do!
-
RE: University website outbound links issue
Totally agree with Dirk. If the links offer value to the visitor, and if grouped they are tightly and obviously associated with the theme of the page, then they should always be left as is. If there is a hint of paid-for links they should be nofollow, but in essence having nofollow means you don't trust the page/site, so why have it there in the first place (unless there is some commercial interest in having it there)?
-
RE: On Google Analytics, Pages that were 301 redirected are still being crawled. What's the issue here?
Patrick is spot on. Also be aware that although you have done a 301 Google may not crawl immediately and the old page may still appear in the index (redirecting when you select it).
-
RE: What is the best way to take advantage of this keyword?
I don't totally agree that it truly doesn't matter. I'm strongly siding with Rand on this one - https://moz.com/blog/subdomains-vs-subfolders-rel-canonical-vs-301-how-to-structure-links-optimally-for-seo-whiteboard-friday
-
RE: How do you know if SEO factors are holding you back in rankings?
Further to John's response, I would be meticulous in squeezing out the very best in:
- all aspects of page speed
- ensuring only the URLs you want are indexed (no tags, query strings etc)
- no redirect chains (see the sketch after this list)
- reduce/remove 404s
- reduce/remove 301s
- sitemap is accurate and weighted correctly
- check the cached URLs (text only) are showing the text and not swamped by other stuff on the page.
There is other stuff but these sometimes throw open other issues to look at.
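On the redirect-chain point, here's a minimal sketch in Python, assuming the requests library (the URL is a placeholder):
import requests
response = requests.get("http://www.example.com/old-url", allow_redirects=True)
# response.history holds every intermediate hop; more than one means a chain
for hop in response.history:
    print(hop.status_code, hop.url)
print("final:", response.status_code, response.url)
if len(response.history) > 1:
    print("redirect chain - point the first URL straight at the final one")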
-
RE: Can I undo 301 redirects to purchase site
Yes, that is the key - what has happened to the pages on the server? You would hope that they still exist and you can revert. If the site is intact you can just remove any 301 or 302 redirects and restore. It may be that Google has removed the URLs from the index, but these would be crawled again once the redirects are sorted.
-
RE: 301 Redirects... Redirect all content at once or in increments?
The only time I could imagine staging is if the existing site is in a mess and there are spurious redirect chains already going on. So it might be worth working out the true 'original' URLs, removing the existing redirects, implementing the new redirects, testing, and then mopping up anything left over. But as I say, I can only imagine doing it in that kind of scenario.
-
RE: Using hreflang="en" instead of hreflang="en-gb"
From my understanding, if you have hreflang="en-gb" then that page (or those pages) are targeted at the UK. If you wish to target any English-speaking country then you add hreflang="en". But if you wish to target specific English-speaking countries then you'd use hreflang="en-ie", hreflang="en-gg" etc.
What you are doing is giving Google information, not a directive, as to which pages are targeted where. Google can ignore it, and it's not a ranking solution. You are just giving Google a heads-up on your intentions.
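To confirm what you have actually published, here's a small sketch in Python that lists a page's hreflang annotations - assuming requests and beautifulsoup4, with a placeholder URL:
import requests
from bs4 import BeautifulSoup
soup = BeautifulSoup(requests.get("http://www.example.com/").text, "html.parser")
for tag in soup.find_all("link", rel="alternate", hreflang=True):
    # e.g. "en-gb" -> UK, "en" -> any English speaker, "en-ie" -> Ireland
    print(tag["hreflang"], "->", tag["href"])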
-
RE: Wrong redirect used
It sounds like you have done all the right things. I agree with Vettyy that you should use something like Screaming Frog to crawl all the old URLs just to double-check there are no hanging 404 pages or missed http pages. Switching to 301 will take a few days to filter through, so you could run cache:domain.com in Google on your most important pages to monitor when they are being crawled. Also, do you have a mix of http and https in Google at present? It may very well be something to just wait and monitor.
A good tool for sniffing URL headers is Fiddler.
-
RE: Removing massive number of no index follow page that are not crawled
Personally I don't agree with setting internal filter URLs to nofollow. I set noindex as you have done and add the filter parameters under Search Console > Crawl > URL Parameters.
For the option "Which URLs with this parameter should Googlebot crawl?" you can set "No URLs" (if the filters are uniform throughout the site).
"No URLs: Googlebot won't crawl any URLs containing this parameter. This is useful if your site uses many parameters to filter content. For example, telling Googlebot not to crawl URLs with less significant parameters such as
pricefrom
andpriceto
(likehttp://www.examples.com/search?category=shoe&brand=nike&color=red&size=5&pricefrom=10&priceto=1000
) can prevent the unnecessary crawling of content already available from a page without those parameters (likehttp://www.examples.com/search?category=shoe&brand=nike&color=red&size=5)"
-
RE: Robots.txt: how to exclude sub-directories correctly?
Install the Yoast WordPress SEO plugin and use that to restrict what is indexed and what is allowed in a sitemap.
-
RE: Will switching my domain cause SEO suicide?
If it is a new domain name there will be no domain authority and you will have to build up the site again from scratch. The 301s will help kick start the process but you will need to put in time and effort to restore your rankings.
-
Search function rendering cached pages incorrectly
On a category page the products are listed via/in connection with the search function on the site. Page source and front-end match as they should.
However, when viewing a browser-rendered version of a Google cached page, the URL for the product has changed from, as an example -
https://www.example.com/products/some-product
to
https://www.example.com/search/products/some-product
The source is a relative URL in the correct format, so /search/ is being added at browser rendering.
The developer insists that this is OK: the query string in the Google cache page's URL is triggering the behaviour and confusing the search function - all locally. I can see this, but I just wanted feedback on whether internally Google will only ever see the true source, or whether its rendering mechanism could possibly trigger similar behaviour.
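For anyone following along, the mechanism is easy to reproduce with Python's standard library; the href below is a hypothetical path-relative link like the one described:
from urllib.parse import urljoin
href = "products/some-product"  # path-relative link in the page source
# Resolved against the real page URL:
print(urljoin("https://www.example.com/", href))
# -> https://www.example.com/products/some-product
# Resolved against the cached page's /search/ path:
print(urljoin("https://www.example.com/search/", href))
# -> https://www.example.com/search/products/some-product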
-
RE: What is the best SEO way to categorize products on an ecommerce site
.../category-green/some-product.html
.../category-brand/some-product.html
.../category-widget/some-product.html
A clean URL .../some-product.html should exist, and the canonical tag within the top three URLs should be pointing to this clean product URL. The 3 above can be crawled and indexed, but the canonical is assisting Google in understanding which is the correct product URL. A product can rightly exist in multiple categories.
If the platform is such that a clean product URL is not possible (urghh!), then a strategy needs to be developed to choose one of the category/product URLs as the canonical.
My preference though is to have a category structure and all product links coming from those category pages are clean product URLs in the first place, with self referencing canonicals.
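As a quick way to verify that set-up, here's a sketch in Python - assuming requests and beautifulsoup4, and using placeholder URLs in the shape of the examples above - checking that every category variant canonicalises to the same clean product URL:
import requests
from bs4 import BeautifulSoup
variants = [
    "https://www.example.com/category-green/some-product.html",
    "https://www.example.com/category-brand/some-product.html",
    "https://www.example.com/category-widget/some-product.html",
]
for url in variants:
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    tag = soup.find("link", rel="canonical")
    # Each variant should point at the clean .../some-product.html URL
    print(url, "->", tag.get("href") if tag else "missing canonical")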