Small update - comparing the results of the old & new site, the order of the results seems identical (I don't know whether the order was different before the migration).
Best posts made by DirkC
-
RE: 301'd site, but new site is not getting picked up in google.
-
RE: Quickest way to deindex large parts of a website
Hi,
There was a similar question a few days ago: https://moz.com/community/q/is-there-a-limit-to-how-many-urls-you-can-put-in-a-robots-txt-file
Quote: Google Webmaster Tools has a great tool for this. If you go into WMT and select "Google index", then "remove URLs". You can use regex to remove a large batch of URLs then block them in robots.txt to make sure they stay out of the index.
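To illustrate (a minimal sketch - "/old-section/" is just a placeholder for whatever you want to keep out), the robots.txt block would look like this:
User-agent: *
Disallow: /old-section/
Keep in mind that robots.txt only blocks crawling; the removal request in WMT is what actually gets the URLs out of the index.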
Dirk
-
RE: New Global Company website launch question
Several elements here:
1. Migrating from subdomain to main domain - Moz "official" line is that this should be beneficial (check Rand's WBF on this https://moz.com/blog/subdomains-vs-subfolders-rel-canonical-vs-301-how-to-structure-links-optimally-for-seo-whiteboard-friday)
2. If you change your URLs - 301s are still your best option. It's possible that they don't transfer all link equity - but it's still the best option you have. Watch out for redirect chains, however. You might find this post interesting - also check the comments below it: https://moz.com/blog/accidental-seo-tests-how-301-redirects-are-likely-impacting-your-brand. From personal experience - I have migrated several big sites to new platforms, completely changing the URL structure, and in most cases you couldn't even notice the smallest glitch in Google Analytics. So it's not a given that your rankings/traffic will drop after redirects.
3. You mention that you are also doing a redesign - improving both design and content. This is the big unknown. The question is always whether your carefully designed new pages are going to please your visitors. In the migration cases where we saw a drop in traffic, this was very often related to the redesign - fewer pageviews/visit, a higher bounce rate, lower time on site. A few days after migration, traffic dropped (and it indeed took a few months to regain the original traffic/rankings).
4. Incoming links. I agree with Dmitrii: in an ideal world you would reach out to all the webmasters and they would kindly update the old links to the new ones. Reality is that some of them have heard SEO stories that outgoing links to other sites could get you punished by Google - and that it's always better to use nofollow links or remove all links to commercial parties. I fear that the risk of losing links here is bigger than the gain.
Hope this helps.
Dirk
-
RE: 301'd site, but new site is not getting picked up in google.
Summarising (this thread is becoming extremely long):
- redirects seem to be implemented as they should
- user engagement seems to have improved after the migration
Performance is & has been an issue - with unresponsive scripts & pages that load quite slowly. The quality of the HTML is (was) quite poor.
I would stick to my original advice - ask an HTML/CSS guru to have a look at your code, clean it up & implement some of the performance improvements that were already mentioned before (to reduce the time to first byte). One thing you could already correct in the code is the location of an embedded JavaScript - it should be in the body or head; currently one script sits outside the html tag.
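As a rough illustration (the file name is made up), the script tag should live inside the head or body, not after the closing html tag:
<head>
  <script src="example.js"></script>
</head>
Anything that currently sits after </html> should be moved inside one of those two sections.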
Good luck
Dirk
-
RE: Hash Bang
Without additional info, like the actual URL, questions like this are impossible to answer. Did you see the rankings for your major keywords go down? Is your site still accessible to bots? Is Analytics working properly?
Dirk
PS You still have an open question here - https://moz.com/community/q/access-to-ga-revoked - you might mark that one as answered
-
RE: Tools to group large list of keywords
Found this one recently - looks promising: http://kwgrouper.huballin.com/
-
RE: Is it problematic for Google when the site of a subdomain is on a different host than the site of the primary domain?
Hi Martin,
This should not have an impact. Google is focused on providing the best user experience: relevant content for its visitors, adapted to the device they are using & served in an efficient way (performance). How you organise the technical architecture behind it should not matter. You would certainly not be the first to run a website on different servers for different parts of the site. A typical example is a shop using Magento or some other e-commerce application on one host, and a blog on WordPress on another host.
That said - there are still some points you need to take into consideration:
1. Server location - while probably not as important as it used to be - it's better to keep the server close to your main audience (mainly because it can have an impact on performance). Quote from http://googlewebmastercentral.blogspot.com.es/2010/03/working-with-multi-regional-websites.html on location as a ranking factor:
"Server location (through the IP address of the server) is frequently near your users. However, some websites use distributed content delivery networks (CDNs) or are hosted in a country with better webserver infrastructure, so we try not to rely on the server location alone."
If both servers are in the same location it should not be a problem.
2. You might consider shifting the subdomain to a directory, as Google seems to prefer all content on one subdomain. You might want to check this WBF on subdomains vs directories: https://moz.com/blog/subdomains-vs-subfolders-rel-canonical-vs-301-how-to-structure-links-optimally-for-seo-whiteboard-friday
Even when using a directory - it is still possible to use two different servers.
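As a sketch only (assuming an Apache front end; the blog host name is hypothetical), mod_proxy can serve a directory from a second server:
# requires mod_proxy and mod_proxy_http
ProxyPass        /blog/ http://blog-server.example.com/
ProxyPassReverse /blog/ http://blog-server.example.com/
That way visitors and bots only ever see www.yourdomain.com/blog/, while the content is actually generated elsewhere.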
Hope this helps,
Dirk
-
RE: How to maximize CTR from Google image search?
3/4% is already quite a good rate for image search results. By definition image search is quite visual - people click on the image they like most. Given that a lot more results are displayed in image search, click rates will be quite similar as long as you are in the first 4/5 rows.
One exception - if Google includes an image strip in the normal SERPs you will get click rates of 30/40%.
Try to rank for different kinds of images to cater to every taste. I tried the search query and your site ranked quite high (1st/5th position), but compared to the other images yours were a bit fuzzy/not really sharp.
Certainly don't put very visible watermarks/overlays on them - this will completely erase your images from the results (people don't like to click on them, so Google doesn't like to show them).
Dirk
-
RE: HELP! My website has been penalized - what did I do wrong?
Hi,
Never heard of SEO Spyglass before - are you sure that the authority rankings it provides are reliable? Did you also notice a decrease in traffic? Did you notice anything special in Google Webmaster Tools?
When I checked your site I found a lot of content below the fold & pages which are very heavy (>1.5K) due to the images. Is the content you added to the blog original, or taken from other sources?
Dirk
-
RE: Removed Product page on our website, what to do
Hi,
Both redirects & leaving as 404 (or 410) are valid options.
If you are removing this entire category & the corresponding products because you stop selling them - you could serve a custom 404 (or 410), explaining to the visitor that the products are no longer available and pointing out the alternative products you can offer them.
According to Google (https://support.google.com/webmasters/answer/2409439?hl=en)
"When you remove a page from your site, think about whether that content is moving somewhere else, or whether you no longer plan to have that type of content on your site.
- When moving content to a new URL, redirect the old URL to the new URL—that way when users come to the old URL looking for that content, they’ll be automatically redirected to something relevant to what they were looking for.
- When you permanently remove content without intending to replace it with newer, related content, let the old URL return a 404 or 410. Currently Google treats 410s (Gone) the same as 404s (Not found).
Returning a code other than 404 or 410 for a non-existent page (or redirecting users to another page, such as the homepage, instead of returning a 404) can be problematic. Such pages are called soft 404s, and can be confusing to both users and search engines."
You also might want to check this article http://searchengineland.com/googles-matt-cutts-seo-advice-unavailable-e-commerce-products-186882
If you still sell the products but you moved them to another category, or if you don't sell these products anymore but you offer very similar products, you could consider putting a 301 to the alternative categories/products. As an example: you stop selling the white tulips "Amsterdam" - but you still have the white tulips "Utrecht" - you could redirect the first to the second. They are not identical, but an acceptable alternative for most visitors.
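A hedged .htaccess sketch of both options (Apache mod_alias; the tulip paths are made up for the example):
# the product is gone for good - return a 410
Redirect gone /tulips/amsterdam-white/
# an acceptable alternative exists - 301 to it instead
Redirect 301 /tulips/amsterdam-red/ http://www.example.com/tulips/utrecht-red/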
In your specific case - I guess that you removed the category, but that it will be coming back next year. In that case, it's maybe better to keep the pages and only remove the links to them. On the products themselves, you mention something like 'pre-ordering starts in Jan. 2017 - check out our fall offers' and you mark them as "out of stock". You then just remove the links on your site to this subcategory. (This is quite similar to e-commerce shops with specific Christmas pages - these remain online all year long, but are only made visible as of September.)
The reason why Webmaster Tools is sending you the message that these pages are not found is just to inform you. It could well be that these 404s are unintentional, and by informing you it lets you take the necessary measures. If the 404s are intentional, you don't really have to do anything.
Just make sure that you also update your internal linking - to be sure that no internal links go to the pages you removed. Screaming Frog can help you to check this.
Hope this clarifies,
Dirk
-
RE: A specific keyword has dropped from #1 in Google to nowhere at all...
The way you implemented the link to the mobile site is wrong: you have just put a plain
<link rel="alternate" href="http://mrnutcase.mobi/en-GB/personalised-macbook-cover/" />
It should be like the example here: https://developers.google.com/webmasters/mobile-sites/mobile-seo/separate-urls
On your desktop page, put an alternate link that also carries the media attribute, something like:
<link rel="alternate" media="only screen and (max-width: 640px)" href="http://mrnutcase.mobi/en-GB/personalised-macbook-cover/">
On your mobile page, put a canonical pointing back to the corresponding desktop URL:
<link rel="canonical" href="[desktop URL of this page]">
An additional issue is that the page you put as alternate (the mobile version http://mrnutcase.mobi/en-GB/personalised-macbook-cover/) gives a 404 - so you should correct that. This is probably the main reason why the page is no longer ranked.
Dirk
-
RE: HELP: What happened to my rankings? No warning from google how to know if i was penalised?
Hi Justin,
Normally everything should return to normal after a few days. You could try to speed up the process a bit by taking your key landing pages from Analytics. Fetch these pages in Webmaster Tools (Fetch as Google) - once they are fetched, submit them to the index (of course you first have to remove the noindex tag).
It's quite a common mistake - we had a similar case with a test robots.txt which was put into production. I was on holiday at the time and only noticed the error when I returned (3 days after go-live). Everything returned to normal within a day.
rgds,
Dirk
-
RE: Google is Still Blocking Pages Unblocked 1 Month ago in Robots
Hi,
Fetch the main page(s) with "Fetch as Google" under the Crawl section in Webmaster tools - then submit to the index.
Are you sure that there are no other elements blocking the indexing of the page (like a meta robots tag or an X-Robots-Tag header - see the examples below)?
Also fetch the new robots.txt file - to be sure that Google notices that it has changed.
Did you add a sitemap for this new section - does it show any notifications/warnings in WMT?
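For reference, the two blocking mechanisms mentioned above look like this (illustrative values):
<meta name="robots" content="noindex, nofollow">  (in the HTML head)
X-Robots-Tag: noindex  (HTTP response header)
You can check the response headers with something like curl -I http://www.example.com/page/ to see whether an X-Robots-Tag is being sent.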
rgds,
Dirk
-
RE: Hhreflang been setup correctly?
Hi Chris,
You have 2 issues here:
1. It doesn't make sense to put the US version on http://www.camilla.com.au/us/ - .com.au is an Australian domain extension and therefore geo-targeted by default to Australia. If you want to have a US version you'll have to put it on a generic domain extension (.com / .net / .org / etc., geo-targeted in Search Console to the US) or use the .us extension.
2. Hreflang works on page level - not on domain level. Currently you put a hreflang pointing to the "US homepage" on all pages of your site. What you should do instead: for each page on your site (both the US & AU version) you will have to put at least two hreflang tags.
Example http://www.camilla.com.au/collection/my-wandering-heart-resort-15 & http://www.camilla.com.au/us/collection/my-wandering-heart-resort-15
On both pages put:
<link rel="alternate" href="http://www.camilla.com.au/collection/my-wandering-heart-resort-15" hreflang="en-au" />
<link rel="alternate" href="http://www.camilla.com.au/us/collection/my-wandering-heart-resort-15" hreflang="en-us" />
If you want to have one version as default - then also add an x-default tag, e.g.
<link rel="alternate" href="http://www.camilla.com.au/us/collection/my-wandering-heart-resort-15" hreflang="x-default" />
(if it's the US version you want as default - check http://googlewebmastercentral.blogspot.be/2013/04/x-default-hreflang-for-international-pages.html)
The message "reciprocal not found" indicates that you only put the hreflang to the us version - and that on the us version there is no hreflang link to the au version.
You can scan all the content on Moz on hreflang - but useful links are:
https://moz.com/blog/hreflang-behaviour-insights http://www.aleydasolis.com/en/international-seo-tools/hreflang-tags-generator/ https://support.google.com/webmasters/answer/189077?hl=en
Dirk
-
RE: HELP: What happened to my rankings? No warning from google how to know if i was penalised?
Forgot to mention: don't forget to change it on all three sites.
The new site looks really nice compared to the old one - and speed seems to have improved. However still some work to do:
- time to first byte takes ages - could be related to the configuration of the server or related to some plugin causing delay.
- ask your programmer to use gzip to compress HTML
- minify your css & js
- optimise your images
- extend your caching (the current cache lifetime is too short; see the sketch below)
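A rough .htaccess sketch for the gzip and caching points (assuming Apache with mod_deflate and mod_expires enabled; the cache times are placeholders):
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/jpeg "access plus 1 month"
  ExpiresByType text/css "access plus 1 week"
</IfModule>
Minifying the CSS/JS and optimising the images is something your plugins or build process will have to handle.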
The detailed result from webpage test is here: http://www.webpagetest.org/result/150503_A2_BZ7/
Also check https://developers.google.com/speed/pagespeed/insights/ - your score is not too bad, but check the improvements that are suggested.
Good luck!
Dirk
-
RE: Bolt on Blog Software
Technically this should be possible - you might want to check this article - http://forums.iis.net/t/1178655.aspx (the best answer is at the bottom of the page) - but maybe the other links provided in the answers can be useful as well
Hope this helps,
Dirk
-
RE: Migrating to WooCommerce, similar product descriptions but with different urls, cant use variations.
Sounds like a good idea to avoid the duplicate content. Does it also imply that your visitors are going to land on a page http://siga-sverige.se/siga/fentrim-2 where they can't order the product? Or would that page be like a category page - listing all the different variations (which would probably make more sense)?
Dirk
-
RE: How can I 100% safe get some of my keywords ranking on second & third page?
Hi,
For this type of question - I fear you will only get very generic answers.
Good starting points would be to study:
https://moz.com/beginners-guide-to-seo
https://moz.com/blog/how-to-rank
& implement what is listed there.
Other useful resources are listed here: https://moz.com/academy
In general, put yourself in the mind of your target audience:
- what are they looking for
- what is the information they need
- how can your content / product help them in fulfilling that need
Check the competition: who are your competitors - what are they doing? How can you improve what they are doing?
If you encounter specific questions or issues you can post them here - the more specific the question is the better the answer (in general)
Good luck,
Dirk
-
RE: Help! How to Remove Error Code 901: DNS Errors (But to a URL that doesn't exist!)
Hi,
It's coming from the links in your footer - I attached a screen copy. (the copy is made on http://www.justkeyrings.co.uk/sitemap/ )
Not sure what you're doing wrong - the "bad" link is not always appearing (and it appears with both js enabled/disabled - caching enabled/disabled).
It certainly would not hurt to clean up your HTML code a bit - it doesn't necessarily have to pass the W3C validator, but adding basic things like a doctype declaration wouldn't hurt (http://www.htmlhelp.com/tools/validator/doctype.html). It helps the browser render your page better; I am however not sure that this will help for this specific problem.
Hope this helps a bit with your search.
Dirk
-
RE: Solving pagination issues for e-commerce
You shouldn't use rel canonical for pagination - its main use is to avoid duplicate content issues. It's possible to combine it with rel next/prev, but only in very specific cases - an example can be found here: https://support.google.com/webmasters/answer/1663744?hl=en :
rel="next" and rel="prev" are orthogonal concepts to rel="canonical". You can include both declarations. For example, http://www.example.com/article?story=abc&page=2&sessionid=123 may contain:
=> as you can see the canonical is used to strip the sessionid which could cause duplicate content issues - not to solve the pagination issue
With rel next/prev you indicate to Google that the sequence of pages should be considered as one - which makes sense if you have, say, 4/5 pages max. If you have a huge number of pages in a pagination series this doesn't really make sense. In that case you could just decide to do nothing - or only have the first page indexed and give the other pages a noindex,follow tag.
Hope this clarifies.
Dirk
-
RE: What is the difference between rel canonical and 301's?
For the examples you gave I would certainly not use a 301 or a canonical tag. The content is unique - only a relatively small part is common (the list).
To explain the difference:
A canonical tag is used if you have pages that are identical (or almost identical) and which are accessible under different URLs. A good example is an e-commerce site with a list of articles like mysite.com/umbrellas - if sorting the products changes the URL to something like mysite.com/umbrellas?sort=high, it's best to put a canonical on the second URL pointing to the first, so that Google will not index all the variations. A visitor can still access both pages. Googlebot normally respects the canonical - but is not obliged to do so.
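To make it concrete (using the example URLs above), the sorted variation mysite.com/umbrellas?sort=high would carry in its head:
<link rel="canonical" href="http://mysite.com/umbrellas">
while mysite.com/umbrellas itself either has no canonical or one pointing to itself.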
A 301 is different - in fact you give this message to the browser: this page is no longer available at this location but has moved to a new location. It's no longer possible to visit the original page (not for humans & not for bots). Googlebot has to respect this directive.
A last option is "noindex,follow". You normally use this for pages that have very little value for search engines, but where you would still like the bots to follow and index the pages listed on them. A typical use is pages of the type blog.com/tag/subject that generate lists of all the articles tagged with that subject. In general such pages are good for cross-linking, but they have low value for search engines, so it's better not to have them indexed.
Hope this clarifies,
Dirk
-
RE: 302 redirected links not found
Hi,
While these links may not be visible on the page itself - they do exist in the HTML.
Just take a look at the source code of your homepage & search for the word 'compare' and you'll find links like this:
- class="separator">|<a <span="" class="html-tag">href</a><a <span="" class="html-tag">="</a>http://www.stopwobble.com/catalog/product_compare/add/product/98500/uenc/aHR0cDovL3d3dy5zdG9wd29iYmxlLmNvbS8,/form_key/Vw93RYTNzbI4GGns/" class="link-compare">Add to Compare
which will be followed by bots.
You could put a "nofollow" on these links (probably the easiest solution), block /product_compare using robots.txt - or use javascript to insert the links only when users are logged in.
rgds,
Dirk
-
RE: If I put a piece of content on an external site can I syndicate to my site later using a rel=canonical link?
No - it won't be seen as manipulative, in fact it is the recommended way to syndicate content. Check https://support.google.com/webmasters/answer/139066:
"Addressing syndicated content. If you syndicate your content for publication on other domains, you want to consolidate page ranking to your preferred URL.
To address these issues, we recommend you define a canonical URL for content (or equivalent content) available through multiple URLs"
-
RE: Keyword stuffing?
Hi Bob,
Don't really agree that the text "reads ok" - it looks like an SEO over-optimised text. Not something I would want to read when looking for info on the subject or wanting to buy one.
I'm not a big believer in keyword density - but this site uses the word "waterpijp" 45 times on the homepage - and "sisha" 22 times - on a total of approx. 950 words, which seems a bit over the top.
I would put a copywriter on it and remove all the unnecessary mentions to improve the readability. If not for Google, then for your customers.
Dirk
-
RE: Secure HTTP Change - No Links in WMT
Hi Kevin,
You will have to validate the https version as well - some background info: http://searchengineland.com/google-webmaster-tools-index-status-showing-data-https-protocol-187992 quote: "In order to see your data correctly, you will need to verify all existing variants of your site (www., non-www., HTTPS, subdirectories, subdomains) in Google Webmaster Tools"
rgds,
Dirk
-
RE: Duplicated content multi language / regional websites
I agree with Jordan on this - shouldn't cause troubles.
Just make sure that you at least adapt the wording on the site - we might both speak Dutch, but not all the words have the same meaning & we don't use the same words to describe the same things. As an example - in Belgium we like "konfituur" - you prefer "jam" - it's pretty useless to try to put a page optimised for "jam" in Belgium as nobody will look for it.
Dirk
-
RE: Strange URL's for client's site
Hi,
You're quite right that clean, readable URLs are useful - both for visitors & bots.
There is no technical need to have these 'ugly' URLs - they can always be rewritten to something nicer. You will have to use a combination of URL rewriting & redirects - you can find some useful links here on how to implement the rewriting (the article is not very recent, but the basics haven't changed). If they use a CMS it could also be useful to check its documentation - almost every decent CMS offers some built-in rewriting functionality.
The second issue with the strange domain name can be solved with a 301 redirect - by adding these lines in the .htaccess file of the "strange domain"
RewriteEngine On
RewriteCond %{HTTP_HOST} ^olddomain.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.olddomain.com$
RewriteRule (.*)$ http://www.newdomain.com/$1 [R=301,L]
(No need to say that you'll have to replace olddomain & newdomain with the actual domain names.)
Apart from the wrong domain, the issue with the tracking parameters in the URL could be solved by either a redirect or a canonical URL. With the redirect rule above the webwite-px.rtk.com URLs will be redirected to www.yourdomain.com - but this doesn't get rid of the tracking code.
You could put a self-referencing canonical URL in the head of the pages, or strip off the parameters using a redirect (you can find an example of how this could be done here).
If you use the canonical solution - it could be a good idea to strip off the parameters in Google Analytics as well.
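A rough sketch of the redirect option (Apache mod_rewrite; the parameter name is a placeholder - replace it with the actual tracking parameter in your URLs):
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)trackingid= [NC]
RewriteRule ^(.*)$ /$1? [R=301,L]
The trailing ? in the substitution drops the query string from the redirected URL.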
Hope this helps,
Dirk
-
RE: Duplicated content multi language / regional websites
Bob,
It depends on the category & type of product. I remember a Dutch site selling shutters that just put the NL content on a BE domain - the problem was that in Belgium we don't use that word when looking for this type of product, and hence Google wasn't showing the site (they did rank pos. 1 for shutters in Belgium, but probably with 0 traffic).
You don't have to rewrite the content for Google - but it would probably be a good idea to let a Flemish person check the content. If it's just a small word here and there it's no problem - if it's about your main keywords then it's an issue
To reply to your other question - when searching in BE I quite often get NL results if Google doesn't find a good BE result or the NL site is just better. You could just put the content on the be domain - and see if it brings results (even without doing the cross-linking - although I think that would be a useful feature). Belgian backlinks will always help - but it will take time & effort. Take a trial & error approach - there is no risk - if it doesn't work you can always improve later on.
Dirk
-
RE: Is there an SEO advantage to blog content being a child of /blog/ rather than the homepage?
While I agree with the concept of Danny's answer (the farther content is from the root, the lower the possibility of ranking) - there is no relationship between the depth of an article & the structure of the URL.
To put it in extreme terms - a URL like mydomain.com/folder/subfolder/subsubfolder/page could be 1 click from the homepage & a URL like mydomain.com/page could be 5 clicks from the homepage.
I would migrate to the structure you propose: domain.com/blog/post.htm, mainly because it's easier for reporting purposes. You can find an interesting article from Bruce Clay here on why it's good to have a well-structured URL: http://www.bruceclay.com/blog/structured-urls/
rgds,
Dirk
-
RE: Search Results Pages Blocked in Robots.txt?
It's probably a good thing - I would keep them blocked.
Check https://www.mattcutts.com/blog/search-results-in-search-results/ - quote "Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don’t add much value for users coming from search engines."
Dirk
-
RE: URL Parameters
Hi,
You might want to read this article on faceted navigation on the Google Webmaster blog, which gives some good advice on how to handle the situation. What to use depends a bit on your actual situation.
Options include using nofollow links, using a separate subdomain, or blocking in robots.txt (using a separate folder). On Moz there is this article (the part on faceting) - it's mainly about listing sites - but the core problem is more or less similar.
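If you go for the robots.txt route, a sketch could look like this (the folder and parameter names are placeholders for wherever your faceted URLs actually live; Googlebot supports the * wildcard):
User-agent: *
Disallow: /filter/
Disallow: /*?color=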
Hope this helps,
Dirk
-
RE: Why is Moz.com saying that none are linking to www.oneworldcetner.eu
Even when the index is updated - it's still no guarantee that your links are going to show up. The Moz index is huge - but still only 25% (or less) of the Google index.
Check https://moz.com/help/guides/research-tools/open-site-explorer -
"Just so you know, here's how we compile our index:
- We grab the most recent index.
- We take the top 10 billion URLs with the highest MozRank (with a fixed limit on some of the larger domains).
- We start crawling from the top down until we've crawled 65,000,000,000 pages (which is about 25% the amount in Google's index).
- Therefore, if the site is not linked to by one of these seed URLs (or one of the URLs linked to by them in the next update) then it won't show up in our index. Sorry! :("
Other tools may have different approaches - this is why it's a good idea to combine different sources to get a better idea of which links you gained (Ahrefs, SEMrush, Moz, and so on).
Dirk
-
RE: Where are the crawled URLS in webmaster tools coming from?
These errors are the problems Googlebot encounters while crawling your site. A sitemap can help Googlebot crawl your site better, but isn't strictly necessary.
rgds
Dirk
-
RE: Mozbot Can Not Crawl Entire Domain
It's caused by the way you have built your site. If you click on redken.com - you get the choice of language. If you select "USA" you're redirected with a 302 to redken.com/USA - then with a 302 to redken.com/?country=USA - then with a 302 to redken.com. I guess for browsers you store this somewhere (a cookie?) - however a simple bot (like Moz - I see the same with Screaming Frog) just goes back to where it started = redken.com, which again starts the same loop.
So - only 4 URLs can be crawled. The other countries are on different URLs, so they will not be included in the crawl.
Googlebot is smarter and acts more like a real browser, so it will crawl the site - but Mozbot can't do that.
rgds
Dirk
Update - I actually forgot one redirect - redken.com is first redirected with a 302 to redken.com/international
PS The site is horribly slow as well - and the redirect chain is certainly not helping.
-
RE: How are Server side redirects perceived compared to direct links (on a Directory site)
If I understand it correctly, it will be something like: directory.com/listing.htm links to directory.com/companypage, which is then redirected to www.company.com (so users never actually see directory.com/companypage).
I guess this type of link will be considered a "follow" type link, as server-side redirects pass link juice to the destination (unless they block directory.com/companypage from indexing with their robots.txt and/or they put a nofollow on all the links to directory.com/companypage).
Dirk
-
RE: Moz scraper
If you're talking about Moz Analytics: Moz crawls your site once/week - check here: https://moz.com/help/guides/moz-pro-overview/crawl-diagnostics:
'When you set up your campaign Roger Mozbot does a Starter Crawl of your site within two hours, crawling up to 250 pages. After that, he does a full crawl of your site (up to your page limit) and continues to do a full crawl once a week'
Dirk
-
RE: Sitelinks Issue - Different Languages
If you look at the results on Google.fr - I find it more surprising that, apart from the first result, all the other results that are shown come from the .com version rather than the .fr version. If I search for Revolve clothing on google.pt - I only get the US results & Instagram.
You seem to use a system of IP detection - if you visit the French site from an American IP address you are redirected to the .com version (at least for the desktop version) - check this screenshot of the French site taken with an American IP address: http://www.webpagetest.org/screen_shot.php?test=150930_BN_1DSQ&run=1&cached=0 => this is clearly the US version. Remember that the main Googlebot is surfing from a Californian IP - so it will mainly see the US version. There are bots that visit with other IPs, but Google doesn't guarantee that these visit with the same frequency & depth (https://support.google.com/webmasters/answer/6144055?hl=en). This could be the reason for your problem.
On top of that - your HTML is huge - the example page you mention has 13038 lines of HTML code and takes ages to load (16 sec - http://www.webpagetest.org/result/150930_VJ_1KRP/ ). The size is a whopping 6000KB. The Google speed score is 39%. You might want to look into that.
Hope this helps,
Dirk
-
RE: Https lock broken possibly due to absolute http header footer image links.
Hi,
Not an encryption specialist - but maybe this question on Stack Exchange can help you solve the issue. According to one of the comments on the best answer (by tlng05), this could be done by modifying some server settings to allow clients to use at least one of the listed ciphersuites. In Apache, this can be done in your VirtualHost configuration file. There is an SSL config generator you can use to make this easier: mozilla.github.io/server-side-tls/ssl-config-generator
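A very rough sketch of the relevant Apache directives (values are illustrative - use the Mozilla generator mentioned above to get an up-to-date cipher list):
<VirtualHost *:443>
  SSLEngine on
  SSLProtocol all -SSLv3
  SSLHonorCipherOrder on
  SSLCipherSuite HIGH:!aNULL:!MD5
  ...
</VirtualHost>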
Hope this helps,
Dirk
-
RE: Redirect /label/ to /tags/
Hi,
For questions about redirect Google & Stackoverflow are your best friends:
http://stackoverflow.com/questions/18998608/redirect-folder-to-another-with-htaccess
Put this code in your .htaccess (if you already have rewrite rules, you only have to add the RewriteRule line below before or after the existing ones, not the rest of the code - it's difficult to say whether it needs to be before or after, as it depends on the rules that you already have):
Options +FollowSymLinks -MultiViews
# Turn mod_rewrite on
RewriteEngine On
RewriteBase /
RewriteRule ^label/(.*)$ /tags/$1 [L,NC,R=301]
rgds,
Dirk
-
RE: Cleaning up 404s
Hi
According to Matt Cutts a 404 means the page is gone (but not necessarily permanently) - permanent would be a 410 (although he also indicates that there is very little difference between the two from an SEO perspective).
What to do with these 404s depends a bit on the situation:
- if these pages have external links pointing to them - I would try to redirect them (even better: ask whoever is linking to update the links, although that may be difficult in practice)
- if these are old URLs which haven't been used for a while & don't generate traffic - just leave them - they will disappear
- where do these 404s come from - if they are just listed in WMT, you can ignore them; if actual people are trying to visit your site on these pages, I would try to redirect them to the appropriate new page (if not for the SEO, then for the user experience)
- check that no internal links exist to these old 404 pages (Screaming Frog is made for this)
You say you 301'd the pages of the old site - did you check your landing page report in Analytics before the migration? You should make sure that these top 5000 URLs are properly redirected - normally Google will figure it out after a while, but it can have a negative impact on your results if a lot of these landing pages generate a 404.
As an additional resource - probably a bit too late now - you could check the different steps in this guide: http://moz.com/blog/web-site-migration-guide-tips-for-seos to be sure that you didn't miss something important.
Hope this helps
Dirk
-
RE: How bad is it to have duplicate content across http:// and https:// versions of the site?
The biggest problem with duplicate content (in most cases) is that you leave it up to Google (or search engines in general) to decide to which version they are going to send the traffic.
I assume that if you have a site in https you would want all visits to go to the https version. Rather than using a canonical url (which is merely a friendly request) I would 301 the http to the https version.
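A common .htaccess way of doing that (a sketch, assuming Apache with mod_rewrite):
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]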
Dirk
-
RE: Referral Traffic from Google
In countries where the "old" image search is still used (like Germany) - it is counted as referral traffic rather than search traffic (not sure why). You could check in Webmaster Tools what your % of image search is (and compare January & February).
You could also check the landing pages report for this referral traffic and compare it with the landing pages in Acquisition > Search Engine Optimisation > Landing Pages with the secondary dimension "Google Property".
rgds,
Dirk
-
RE: My wepgages aren't crawled by google
Hi
You didn't answer the first part of Zoe's question - are you sure that your site is crawlable and that there are no issues with the robots.txt / noindex tags, IP detection systems, canonicals on all pages pointing to the home, and so on? Just because you can see all the pages of your site in a browser doesn't mean they are accessible/crawlable/indexable by Google.
Try a crawl with Screaming Frog and user agent Googlebot to see if your pages can be crawled and indexed.
Backlinks are needed to have your site ranked for keywords - but they are not a prerequisite for having your site crawled. (I have noticed that a few times when a dev site was indexed by accident.)
Without the actual url it's impossible to give a more detailed answer.
Dirk
-
RE: Is there an efficient way to block/filter referral spam in Google Analytics for a large network of websites?
Just to add to Donna's answer - migrating Analytics to Tag Manager is not going to help. It is equally affected (it's quite easy to retrieve the Analytics ID inside it).
rgds,
Dirk
-
RE: Duplicate Content Showing up on Moz Crawl | www. vs. no-www.
If the redirection was done quite recently - it's possible that Moz hasn't recrawled your site (crawls are once/week)
If this isn't the case - check the http header of the pages which Moz indicates as duplicates, to be sure that they are properly redirected (with a tool like httpstatus.io).
Adding a canonical never hurts but it's better to solve the problem at the root.
Dirk
-
RE: Google Analytics and Bounce Rates Query - Should I block access from foreign countries ?
Hi Michael
The first thing you should do is define the geography you are targeting in Webmaster Tools (if you have a generic TLD).
If you block visitors from Brazil on your site, the bounce rate measured by Analytics will go down. However, Google is not using your Analytics data to measure the bounce rate (at least that is what they claim). As all these people get an error message when they try to visit your site, the real bounce rate will increase rather than go down, making the situation even worse - you just would not be aware of it.
What you could do is set up a custom filter in Analytics, showing only the traffic from your target country, and apply this filter to a new view. This gives you better insights into the behaviour of your target audience.
rgds
Dirk
-
RE: How does link juice flow through hreflang?
Not sure if what Dmitrii is stating is correct.
If you check the comments here https://www.distilled.net/blog/distilled/distilledlive-london-a-few-thoughts-on-hreflang/ they state:
" hreflang anotations do not consolidate link equity." (source: Maile Ohye (Google's Developer Programs Tech Lead) at SES London) "Hreflang was not designed to consolidate link authority" (source John Mu - chat with David Sottimano)Also on Moz - Gianluca seems to be convinced of the same - https://moz.com/community/q/will-website-with-tag-hreflang-pass-link-juice-to-other-country-language-version-of-website -
Dirk
-
RE: GA Event: to use this feature visit: EVENT-TRACKING.COM
Analia,
It seems to be a new kind of Analytics spam - I just checked an Analytics account for a site which is under construction (only one page - no real visitors & no events set up).
I saw the same event appearing in my stats. Others see the same thing as well: http://www.redcardinal.ie/general/07-05-2015/just-when-you-thought-ga-referrer-spam-was-bad/ - not sure yet how to clean these events from the reporting.
rgds
Dirk
Edit: also 2 mentions in Google Analytics forum: https://productforums.google.com/forum/#!searchin/analytics/to$20use$20this$20feature$20visit$3A$20EVENT-TRACKING.COM
-
RE: How does link juice flow through hreflang?
In that case it will reinforce the domain (like any external link to any page on the domain).
It's just that a link to domain.com/es/page will not count as a link to domain.com/en/page, even when they are "linked" via the hreflang tag. The same applies when the domains are different, e.g. domain.es/page & domain.co.uk/page - a link to the .es page will not count for the .co.uk page (and domain), even when they are connected via hreflang.
Dirk
-
RE: Does Universal Analytics auto generate events?
Universal Analytics does not track these events by default (even in Drupal) - it must have been set up in the Analytics plugin by your developer.
I made a mistake in my first answer - you should look for "Analytics" in the source (and not events) - you will notice a bit of code stating:
"googleanalytics":{"trackOutbound":1,"trackMailto":1,"trackDownload":1,"trackDownloadExtensions":"7z|aac|arc|arj|asf|asx|avi|bin|csv|doc(x|m)?|dot(x|m)?|exe|flv|gif|gz|gzip|hqx|jar|jpe?g|js|mp(2|3|4|e?g)|mov(ie)?|msi|msp|pdf|phps|png|ppt(x|m)?|pot(x|m)?|pps(x|m)?|ppam|sld(x|m)?|thmx|qtm?|ra(m|r)?|sea|sit|tar|tgz|torrent|txt|wav|wma|wmv|wpd|xls(x|m|b)?|xlt(x|m)|xlam|xml|z|zip","trackDomainMode":"1","trackUrlFragments":1},"field_group":{"fieldset":"full"},"scheduler_settings":{"scheduler_local_storage":1,"ttl":
=> this bit of code is coming from the module and is tracking all downloads of the extensions listed (7z,aac, arc....etc)
Hope this clarifies,
Dirk