Best posts made by MickEdwards
-
RE: Why are they ranking?
You also need to look at the perceived quality of the links to the two pages. Tesco has more links, but they are not as valuable as Poundstretcher's.
-
RE: Merging 11 community sites into 1 regional site
I would try and map each URL of each community site to a corresponding URL within the regional site where possible, rather than pass all link juice to just the homepage.
Something like:
RewriteCond %{HTTP_HOST} ^www.communityx.com$ [NC]
RewriteRule ^some-directory/$ http://www.regionaly.com/new-directory/ [R=301,L]
-
RE: Exclude status codes in Screaming Frog
I don't think you can filter out on response codes.
However, first I would ensure you are running the right version of Java if you are on a 64-bit machine. The 32-bit version functions, but you cannot increase the memory allocation, which is why you could be running into problems. Take a look at http://www.screamingfrog.co.uk/seo-spider/user-guide/general/ under Memory.
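For reference, the memory allocation lives in a small config file shipped with the spider - on Windows it is something like ScreamingFrogSEOSpider.l4j.ini (check your install, the exact filename may differ). A minimal sketch, assuming 64-bit Java and RAM to spare:
-Xmx4g
That raises the Java heap to 4GB; the default is much lower, which is usually why large crawls fall over.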
-
RE: How To SEO Sinhala Teledramas Keyword In Google
The first way is not to spam your keyword on here.
-
RE: Hiding body copy with a 'read more' drop down option
You want Settings >> Show Advanced Settings >> (Privacy) Content Settings >> (Javascript) Do not allow any site to run javascript >> Finished.
Reload the site and check what you can see, or open up.
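If you'd rather script the check, here is a quick sketch (URL and text are placeholders) that fetches the raw HTML - i.e. what a non-JavaScript crawler receives - and tests whether the hidden copy is actually in the source:
import requests

html = requests.get("http://www.example.com/some-page/").text  # raw HTML, no JavaScript executed
if "your hidden body copy" in html:
    print("body copy is in the source - crawlable")
else:
    print("body copy missing from the raw source")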
-
RE: Robots.txt: how to exclude sub-directories correctly?
I've always stuck to Disallow and followed -
"This is currently a bit awkward, as there is no "Allow" field. The easy way is to put all files to be disallowed into a separate directory, say "stuff", and leave the one file in the level above this directory:"
http://www.robotstxt.org/robotstxt.html
From https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt this seems contradictory - it lists Disallow: /* as equivalent to Disallow: / (the trailing wildcard is ignored).
I think this post will be very useful for you - http://moz.com/community/q/allow-or-disallow-first-in-robots-txt
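To illustrate, the "separate directory" approach from robotstxt.org would look like this (directory names hypothetical):
User-agent: *
Disallow: /stuff/
And the Allow variant Google supports, which carves one file out of a blocked directory:
User-agent: Googlebot
Allow: /directory/keep-this-page.html
Disallow: /directory/
Bear in mind Allow is not part of the original standard, so not every crawler honours it.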
-
RE: Existing content & 301 redirects
Yes, a toxic domain will infect a new domain if a 301 is implemented.
I would lean towards cleaning up the existing domain. Even if you end up disavowing every linking domain, the existing domain is likely to have built up more trust than starting from scratch. If it is a manual penalty, ensure you document all the clean-up steps you take so you can detail them in the reconsideration request.
-
RE: How can I filter peoples names out in an Adwords campaign?
You can add negative keywords to your campaign. If you are aware of the names to remove, add them straight in; otherwise it might take a bit of research, monitoring the keywords used to trigger the ads to work out what to add as negative keywords.
-
RE: Hiding body copy with a 'read more' drop down option
Yep, sounds good.
I was working on a site last year and they switched in a DNN module matching your scenario without letting me know, having already tested the existing module. The first I saw of it was when rankings and traffic wobbled. In this case the text was lost in the JavaScript and accounted for about 25-30% of the content on all their main pages. Nightmare!
-
RE: Migrating EMD to brand name domain. Risk of Penguin Penalty?
It probably depends on the anchor. If the EMD was bluewidgets.com, then 'http://www.bluewidgets.com', 'bluewidgets.com' or 'bluewidgets' type anchors should be ok. If there was a high level of 'blue widgets' I would start to be a little concerned.
-
RE: Exclude status codes in Screaming Frog
Took another look, and also checked the documentation and searched online, and I don't see any way to exclude URLs from a crawl based on response codes. As I see it you would only want to exclude on name or directory anyway, as response codes are likely to be scattered randomly throughout a site and excluding on them would impede a thorough crawl.
-
RE: Negative Keyword Help
A negative exact match will only exclude that exact search term. For example:
-[online payment] will still allow 'online payment gateway' to show.
Check this post by Google and look at the chart of possible combinations - http://adwords.blogspot.co.uk/2007/11/adwords-optimization-tips-more-on.html
-
RE: Hiding body copy with a 'read more' drop down option
I've just had fresh content crawled and indexed that is in this scenario. Basically we are saying to the visitor "if you really want to know some more boring technical information then expand this, but we don't want to spoil your experience by vomiting all the data at you at once". Crazy if that is changed.
-
RE: Link exchanges of specific blogs work if relevant?
There is a good answer to this here - http://moz.com/community/q/reciprocal-links-4
-
RE: Help creating a 301 redirect in my htaccess file
For the directories I think you need to go down the rewrite route. This probably isn't fully working as written, but you get the principle.
RewriteCond %{THE_REQUEST} ^GET\ /category/
RewriteCond %{REQUEST_URI} !^/category/sub1/
RewriteCond %{REQUEST_URI} !^/category/sub2/
RewriteRule ^category/(.*) http://www.newdomain.com/category/$1 [L,R=301]
RewriteCond %{THE_REQUEST} ^GET\ /category/sub1
RewriteRule ^category/sub1(.*) http://www.newdomain.com/category/sub1/$1 [L,R=301]
RewriteCond %{THE_REQUEST} ^GET\ /category/sub2
RewriteRule ^category/sub2(.*) http://www.newdomain.com/category/sub2/$1 [L,R=301]
-
RE: PPC Adwords Trademark Protection
I would give them a call. A ticket is then raised and they are usually very helpful.
-
RE: Harms of hidden categories on SEO
You could legitimately have hidden content in terms of categories/URLs - maybe only made available if you sign up, for example. The guidelines are referring to manipulating on-page content, i.e. elements within a page that are hidden and therefore manipulative by default.
-
RE: Incoming links which don't exists...
As soon as you place a site-wide link on a site, you potentially have to wait months for Google to crawl every page of that site, which is why GWT might still be showing the detail. GWT is not the decider as to whether Google takes some action.
As long as you have rectified it with either a nofollow or removal you are fine and have done the right thing.
-
RE: Why blocking a subfolder dropped indexed pages with 10%?
Maybe there were multiple URL variations created - for example URL parameters - which would create multiple URLs to be indexed in Google.
-
RE: Does Google count the domain name in its 115-character "ideal" URL length?
I have always understood the length of the URL to be calculated after the http:// (unless it's https://) - just the way it is displayed in the results.
-
Staff??
Apparently on my last question my profile status says Staff? Is there something I should know??
-
RE: Robots.txt & Duplicate Content
There are 2 distinct possible issues here:
1. Search results are creating duplicate content
2. Search results are creating lots of thin content
You want to give the user every possibility of finding your products, but you don't want those search results indexed, because you should already have your source product page indexed and aiming to rank well. If not, see the last paragraph.
I slightly misread your post and took the URLs to be purely filtered. You should add disallow /catalogsearch to your robots.txt and if any are indexed you can remove the directory in Webmaster Tools > Google Index > Remove URLs > Reason: Remove Directory. This from Google - http://www.mattcutts.com/blog/search-results-in-search-results/
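For reference, the robots.txt entry would simply be the below (assuming all the search results sit under /catalogsearch/, as is typical for Magento):
User-agent: *
Disallow: /catalogsearch/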
If your site has any other parameters not in that directory you can add them in Webmaster Tools > Crawl > URL Parameters > Let Googlebot Decide. Google will understand they are not the main URLs and treat them accordingly.
As a side issue, it would be a good idea to analyse your search results in Analytics. You might find a trend, maybe something searched for that is not a perfect match for the returned results, where you can create new, more targeted content.
-
RE: Will links be counted?
Just as menu options, I don't see why those links would not be crawled. Where you can have problems is if those elements expand to reveal text, and then it all depends how those elements are coded. If they are controlled by JavaScript there is a possibility Google will not read the content.
So in that case, using your example: googling cache:http://www.tesco.com or disabling JavaScript in the browser will indicate what can be crawled.
-
RE: Adding Nofollow tag
If I understand you correctly and you want to stop a particular URL from being indexed, you would be best using the noindex meta tag within the head section of that page:
<meta name="robots" content="noindex, follow">
You probably want the search engines to follow the links on the page even if the page should not be indexed, so add the 'follow'.
-
RE: Remove Google penalty or make a new website, Which is better??
If you know it is backlinks then you should only need to cleanse your profile. Manual - submit a reconsideration request; algorithmic - wait for an update.
If you do use a new domain you have no age and no authority, and ranking will take time. Being local, you have to ensure every source that contains your NAP is updated with the new domain, as you cannot redirect if you suspect there is any infection within the existing profile.
All I am saying is that I don't know your domain or scenario, but just ditching it for a new domain may give you more headaches than you envisage, and it should definitely not be a knee-jerk reaction to a slow recovery.
Then, as mentioned, there is a trust issue for your visitors when changing domain, especially if it is branded. If you do go down that road I would get some business-objective information out there as to why the name changed (on the site, social etc).
-
RE: What are the lowest acceptable metrics for a link?
Chasing domain authority for links is not really as straightforward as that.
- A domain has a DA of 50 but is not really related to your niche and is starting to crack with poor content etc.
- A domain has a DA of 15 but is spot on in terms of directly related content and quality. It's a new site and looks like it will develop.
- A good link profile will have a natural mix of low to high DA, with an upturn around the most common 30-50 DA range. This will include a good sprinkling of nofollow. When researching I tend to filter 25+ but keep an open mind on everything.
For cleaning up a profile you can't rely on those measurements; you need to go into each site and manually check its content and history and its own link profile, and make a judgement on whether you are in the right neighbourhood. Of course directories, 'comments' links, badly placed 'articles' and thinly veiled paid-for (dofollow) links are much easier to weed out.
-
RE: How to Redirect Old Domain to a New Domain?
If the aged domain has links going deeper than the homepage you should look at redirecting those pages/directories directly to their associated pages/directories on the new site. Otherwise all passed link equity will go to the homepage.
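A minimal htaccess sketch of what I mean (domains and paths hypothetical, so test before deploying):
RewriteEngine on
# Map a deep directory on the old domain to its equivalent on the new site
RewriteRule ^old-category/(.*)$ http://www.newdomain.com/new-category/$1 [R=301,L]
# Anything left unmatched can then fall back to the homepage
RewriteRule ^(.*)$ http://www.newdomain.com/ [R=301,L]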
-
RE: How to Fix 404 Errors
I have run the site through ScreamingFrog. You have masses and masses of 301 redirects going on which you need to resolve. It looks like you could have a permalinks issue. What do you have set in Settings >> Permalinks? I'd also look at installing Yoast WordPress SEO, as I recall there are options in there to handle permalinks as well.
-
RE: Changing the XML Sitemap address
Just ensure you update robots.txt with the new address. Re-submit the new sitemap to the search engines. Should be as simple as that. Your URLs have not changed.
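The robots.txt line is just the following (new sitemap address hypothetical):
Sitemap: http://www.some-domain.com/new-sitemap.xml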
Your sitemap does not determine the indexing of all URLs, but it gives the search engines a good idea of what to crawl, on top of them being able to crawl from URL to URL on the site. It mainly helps you see what has been indexed in comparison to what you have submitted. Unfortunately Webmaster Tools is not fully up to speed anyway, so it may not show the same results as site:some-domain.com.
-
RE: Homepage 301 and SEO Help
If you have changed servers I would double check a few things. This is stuff to rule out:
- Page load speed should be good
- All your pages are still indexed in Google
- There are no warnings in Google Webmaster Tools
- Have a look through your link profile - if the majority of links are to non-www then the 301s will shave an edge off the link value from those links. You are right to do the 301, so if this is the case see if you can update those links.
- Have you implemented any other internal 301s to new URLs?
-
RE: SEO Effect of Outbound Links
I agree with Don. Google ultimately wants to see that not only are you an authority in your field, you go out of your way to give your visitor as much information as you can by means of linking to further resources. Although personally I would avoid just dumping in Wikipedia links as they can be viewed as thinly veiled lazy links.
-
RE: How long after disallowing Googlebot from crawling a domain until those pages drop out of their index?
If it is the case that no URL for .us should exist (there are no new URLs) then you can remove them pretty swiftly in Webmaster Tools >> Google Index >> Remove URLs >> select the root URL and choose to remove all directories under it.
-
RE: Threatening SEO practice
I would monitor the links in OSE. Any whiff of dubious links, disavow the domain; any more, disavow the next domain, and so on. They will soon get tired of wasting their resources, if indeed they carry out their threat in the first place.
-
RE: How to Check 301 Done Properly on Homepage
Another good tool is Fiddler for sniffing all header responses.
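If you want to script the check instead, a quick sketch (URL hypothetical) that requests the homepage without following redirects and prints the response code and destination:
import requests

# request the non-canonical homepage without following the redirect chain
r = requests.get("http://example.com/", allow_redirects=False)
print(r.status_code, r.headers.get("Location"))  # expect: 301 http://www.example.com/
A 301 plus the correct Location header confirms the redirect is in place; a 302 would need fixing.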
-
RE: Website completely delisted - reasons?
Looks like you have been injected with spam links. I used MajesticSEO to show the attached (sample).
I'd get into GWT asap. I guess those URLs don't exist, but it is still a big no.
http://i.imgur.com/XFNY1ZG.gif
If you go with the client, disavow that domain for sure.
-
RE: Homepage 301 and SEO Help
This is not implemented. You have both www and non-www URLs, along with /index.php for the homepage. This needs to be implemented in your htaccess file with something like the below (ensure you back up the existing file first):
# Redirect non-www URLs to www
RewriteEngine on
RewriteCond %{HTTP_HOST} !^www\.4cabling\.com\.au
RewriteRule (.*) http://www.4cabling.com.au/$1 [R=301,L]
# Remove the index file from the URL
RewriteRule ^(.*)index\.(htm|html|php)$ http://%{HTTP_HOST}/$1 [R=301,L]
Your page load is woeful and you need to address it - https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2F4cabling.com.au%2F&tab=desktop
Also, you have not got canonical tags set up for each page, which I would recommend adding.
-
RE: Inbound link cleanup and management
I would say in general you can forget about nofollow links. However, and some may disagree with me, I look at those nofollows, and if they are screaming dire spam, have a dubious nature and there is volume, I treat them just the same as any other link. Google wants to see a rounded link profile, so there will naturally be a bunch of nofollow links in there. If those links are bad I'd want to let Google know that they are nothing to do with me. If there is only a very small proportion of poor nofollow links then that is probably ok, as again that is probably quite natural.
-
RE: Website completely delisted - reasons?
Without knowing the domain or any information in GWT nobody is going to be able to really help.
The only thing I can think of is possibly .htaccess blocking Google.
-
RE: Are Directories Dead?
IMHO directories in the main are dead. Where they do succeed is when they are very niche and actually human-managed. If you have the skills and passion in your niche then there is a good possibility you will make it work and become a valuable resource. Using NAP in the detail will provide a valuable citation for the business listing. The SEO impact will be minimal, but for the business it is still a movement in the right direction.
You'll need to heavily market the directory to the business customer base as an invaluable one-stop shop for their needs. That will then drive referral traffic to the listings and allow you to charge a reasonable fee for inclusion.
-
RE: JS loading blocker
Sorry for the delay. I got sidetracked on another project and this client decided they would leave .js as is for the time being so I have not really tested. Initially I couldn't get the Chrome ext to do what I wanted and need to look at Firefox.
-
RE: Inbound link cleanup and management
Personally I wouldn't be worrying about those links. I just meant wholesale links from no doubt spammed domains that are disproportionate to your whole profile. I would just continue building good links to the site.
-
RE: New URL Structure caused virtually All rankings to drop 5 to 10 positions in latest report ?.. Is this normal
As soon as you change URL structure you are creating new URLs, so effectively you are giving Google a new site to work with. The 301s and the current link profile will help pull those new URLs into shape for Google, and a drop is to be expected.
I would be focusing on getting some more quality links coming in and create new content to help the recovery process and move into a stronger position.
-
RE: Location based IP Redirect cuasing Google Search Issue
Have you implemented hreflang to all domains and URLs to indicate to Google your intentions with the domains/URLs?
Also, have you checked Analytics >> Audience >> Geo >> Location for www.example.com.au for a period prior to your implementation, to see if you have excluded a section of traffic? I know you said ranking, but maybe you mean traffic.
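For example, hreflang in the head of each page (or via the sitemap), with hypothetical URLs:
<link rel="alternate" hreflang="en-au" href="http://www.example.com.au/page/" />
<link rel="alternate" hreflang="en-us" href="http://www.example.com/page/" />
Each version needs to reference itself and all its alternates for Google to tie them together.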
-
RE: Treatment of domain names in content that are not actually a link
Yes I absolutely agree with EGOL. If the domain/page authority is good I would rather have a do-follow link. But on the other hand if that is not possible and the brand/domain is mentioned in context then I would be happy that is a signal for Google. Maybe small but still part of the mix.
-
RE: Best tools for an initial website health check?
I second ScreamingFrog. Nothing else comes close.
-
RE: Rankings Tanked since new Site redesign land new url Structure ? Anything Glaringly Obvious I need to check ?
I'd look at your link profile. You have a strong connection to weddings which is not your niche, along with suspect directory listings.