Did you check these pages with the robots.txt tester in Webmaster Tools to be sure that these pages are really blocked for bots? Did you exclude all bots or only Googlebot?
Best posts made by DirkC
-
RE: Moz crawler showing pages blocked by robots.txt
-
RE: Hosting Change & It's Impact on SERP Performance (with a Side of Domain Migration)
Thanks for the compliment Egol!
-
RE: How to find outbound links
Hi Figen,
I fear that will be difficult to find - I did a quick Google search & the first result was this topic on Moz from Jan. 2013 (http://moz.com/community/q/how-do-you-check-the-outbound-links-of-a-site). The answers are basically the same - it's possible to check the outgoing links of a single page using tools; however, if you want to do it for a full domain, you'll need a tool like Screaming Frog (or, as an alternative, Xenu Link Sleuth).
If Google isn't providing the info, all the other potential solutions would require crawling sites - most of the tools available on the market give information about who is linking to your site (or you competitor's site for that matter) - but not the other way.
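For a single page, checking the outgoing links can even be sketched in a few lines of Python - a minimal illustration, with a placeholder domain and html snippet:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class OutboundLinkParser(HTMLParser):
    """Collect links that point to a different domain than `site_domain`."""
    def __init__(self, site_domain):
        super().__init__()
        self.site_domain = site_domain
        self.outbound = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        domain = urlparse(href).netloc
        # Relative links have no netloc; same-domain links are skipped too
        if domain and domain != self.site_domain:
            self.outbound.append(href)

html = '<a href="/about">About</a> <a href="http://other.com/page">Ext</a>'
parser = OutboundLinkParser("example.com")
parser.feed(html)
print(parser.outbound)  # -> ['http://other.com/page']
```

Tools like Screaming Frog essentially do this for every page they crawl on the domain.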
Dirk
-
RE: Using Canonical Tags on Every Page
Hi,
It's not really necessary to do it, but it can be useful to make sure that the right url is being indexed and to avoid duplicate content issues. Example - using a canonical avoids that pages like site.com/index.htm&trackingid=xyz get indexed; only the correct site.com/index.htm will be indexed. Another example could be articles which are published in two sections, but may only be indexed in one section. Be careful though - I sometimes see examples where the canonical is always identical to the url, even in cases like the ones above, which basically renders the canonical useless.
If you are sure that each page of your site has one unique url, then you don't need the canonical url.
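As an illustration, using the hypothetical urls from the example above, the tag on both versions of the page would look like this:

```html
<!-- In the <head> of both site.com/index.htm and site.com/index.htm&trackingid=xyz -->
<link rel="canonical" href="http://site.com/index.htm" />
```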
rgds
Dirk
-
RE: Abnormal crawl issues appearing in my Moz results
Moz only indexes pages its crawler is able to find. This implies that on your production site you have links to your development site.
I don't really agree with what your dev is saying - he should correct these links first and put a noindex on these pages. As an alternative, put a password on the dev site so it's only accessible to people who have it. If a lot of users are putting links to your dev site, it could become more important than your main site. Google will try to choose the most appropriate site - but you have no guarantee that it will choose the right version. In any case, that's not the type of risk you should be willing to take.
Once this is done, you can request a removal of these pages via the Search Console.
If all pages are removed from the index, you can adapt the robots.txt to block access for Googlebot & other bots. Do this only after all pages are removed - if not, Google will never find the noindex directive.
Dirk
-
RE: No back links showing in site explorer but..
Hi,
You might want to check the FAQ on OSE which discusses the most obvious reason why no links are showing up and why there is a difference between OSE & Google WMT.
rgds,
Dirk
-
RE: Hosting Change & It's Impact on SERP Performance (with a Side of Domain Migration)
Hi,
Check the answer of Cyrus Shepard on this question: http://moz.com/community/q/are-cdn-s-good-or-bad-for-seo-edmonton-web - I fear it will be hard to find other quantified data.
"We've done a lot of studies here at Moz and what we've found is this:
- There does seem to be a slight correlation between site speed and rankings (keep in mind that correlation is not causation)
- Our studies have not found a relationship between CDNs and higher rankings.
So the evidence would seem to suggest that CDNs can help your website speed, and it's possible this helps your rankings, but simply using a CDN by itself is no guarantee."
Hope this helps,
Dirk
-
RE: Link juice from links that are not on the page?
Not sure if it's a spammy tactic. If you look at the cached version of these pages you'll notice that it contains a section on the right-hand side, "Travel blogs we like" (http://webcache.googleusercontent.com/search?q=cache:GS8irLH9VKkJ:www.travelsupermarket.com/blog/learn-to-make-a-monkey-out-of-a-towel/towel-origami-109/+&cd=2&hl=nl&ct=clnk&gl=be), with a link to "What's Dave doing".
The date of the cache for this example is Sept. 15th. It's quite possible that in the meantime Travelsupermarket decided to drop that section from the page. Normally these links should disappear in the coming months as these pages get crawled again.
Dirk
-
RE: Links in my website are indexed as separate pages
It's probably an error in the code - best way to solve it is to check the code where the link originates & correct it.
An alternative would be to put a canonical on www.xyz.com/person/name pointing to itself (www.xyz.com/person/name) - normally www.xyz.com/person/name?alinks should then have the canonical as well, pointing to www.xyz.com/person/name.
Without access to the site itself, it's difficult to tell what the error is.
rgds,
Dirk
-
RE: Abnormal crawl issues appearing in my Moz results
Hi Sarah,
Googlebot will follow these links as well and discover these "useless" pages (they are of course not useless from a human perspective, but they don't add value for bots - and they will be considered as duplicates). Duplicates are no reason for "punishment" - so you could just let them be. Personally I would put a nofollow on these links or add a "noindex" tag to the login page. Normally you shouldn't use nofollow on internal links - but login pages are an exception to this (check also https://searchenginewatch.com/sew/news/2298312/matt-cutts-you-dont-have-to-nofollow-internal-links : "Of course, there are always exceptions to the rule, and things like login pages can be the exception. He said it doesn't hurt to put the nofollow link for a link pointing to a login page, or things like terms and conditions or other "useless" pages. However, it doesn't hurt at all for those pages to be crawled by Google.")
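As a quick sketch of the two options mentioned (the url is of course just a placeholder):

```html
<!-- Option 1: nofollow on the internal links pointing to the login page -->
<a href="/login" rel="nofollow">Log in</a>

<!-- Option 2: a noindex tag in the <head> of the login page itself -->
<meta name="robots" content="noindex">
```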
For the practical part - if you add an additional question to a question which has been marked as answered, only the ones who have already answered will see the additional question. To be on the safe side, it's better to open a new question if you want other people to have a look at it.
Hope this helps,
Dirk
-
RE: Why I am not seeing any inbound link to my website?
There is one link in Open Site Explorer (in the fresh discovered section: https://moz.com/researchtools/ose/just-discovered?site=http%3A%2F%2Fb5footballcup.com&filter=&source=&target=page&page=1&sort=crawled)
It's quite possible that there are more links to your site, and that Moz hasn't discovered them yet. You should know however that no tool will be able to discover all the links to your site. Most of them can find the links from high ranked sites, but the more links you have from smaller & less well known sites, the more difficulty the tools will have to find these links.
There was a similar question a few days ago - you might want to check these answers as well : http://moz.com/community/q/link-building-links-aren-t-showing-on-moz-or-semrush
rgds,
Dirk
-
RE: How Do I Scan My New Site & Grade My Work With The Robots Turned Off? For Pre-Inspection before I launch my Site?
You could block the site for all bots except rogerbot (user agent: Mozilla/5.0 (compatible; rogerBot/1.0; http://www.seomoz.org/dp/rogerbot)).
Alternative would be to use a local crawler like Screaming Frog (and modify the spider settings so that it ignores the robots.txt)
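A minimal sketch of such a robots.txt, checked here with Python's standard robotparser (the urls are placeholders, and I'm assuming the rogerbot user agent matches on the "rogerbot" token):

```python
from urllib import robotparser

# Hypothetical robots.txt: allow rogerbot everything, block all other bots.
# An empty Disallow means "nothing is disallowed" for that user agent.
robots_txt = """\
User-agent: rogerbot
Disallow:

User-agent: *
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("rogerbot", "http://example.com/page.html"))   # True
print(rp.can_fetch("Googlebot", "http://example.com/page.html"))  # False
```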
rgds,
Dirk
-
RE: Any Idea who i can contact at Google Finance?
Not sure if it's going to work, but Karolina Netolicka seems to be product manager for Google Finance - you could try to contact her via her Google Plus account, or alternatively send her an invite / InMail via her LinkedIn profile - she's also the one who posts on the Google Finance Blog.
If you know the syntax of general Google addresses, you could try that as well (very often it's something like firstname@google.com or (firstletterfirstname)familyname@google.com).
Good luck,
Dirk
-
RE: Implementing Schema.org on a web page
Hi,
I am not saying that schema is bad or that you shouldn't do it - it just seems that some big players only use schema on the detail pages of an individual product & not on the overview pages. I found an example of a site using it - but in the SERP's it's only the average rating which appears (example: http://www.goodreads.com/author/list/7779.Arthur_C_Clarke).
You can always test what the impact will be - as mentioned before, I guess even for 50 elements fully tagged with schema the impact on page speed will be minimal. Check your current pages with webpagetest.org and look at the breakdown of load time. Probably the html will only account for 10-20% of the load time - the rest being images, javascript & css files. Adding a few hundred lines of HTML will not fundamentally change this (text can be compressed quite well).
rgds
Dirk
-
RE: SEO impact of redirecting high ranking mirror site to the main website
If you want to know how much traffic this domain is getting you could easily put a filter in Analytics which allows you to see the domain name when checking the pageviews.
Create a new view in Analytics and apply a filter with following settings:
Filter Type: Custom
Select: Advanced
Field A -> Extract A: Select: Hostname, value: (.*)
Field B -> Extract B: Select: Request URI, value: (.*)
Output To -> Constructor: Select: Request URI, value: /$A1$B1
Mark "Field A Required" & mark "Override Output Field".
You could also combine all the different copies in one set in the Search Console - which would make it easier to check if there is an impact when activating the 301's.
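As a sketch of what the filter produces - purely illustrative, with a made-up hostname and path:

```python
def combined_request_uri(hostname: str, request_uri: str) -> str:
    # Mimics the Output To -> Constructor value /$A1$B1:
    # the hostname capture ($A1) is prefixed to the request URI capture ($B1),
    # so pageview reports show which domain each hit came from.
    return "/" + hostname + request_uri

print(combined_request_uri("mirror.example.com", "/index.html"))
# -> /mirror.example.com/index.html
```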
Regardless of the traffic, the 301 remains the best option - some people say it does dilute pagerank - Google says it doesn't
As both Search Console & Analytics are not retroactive you could already make them active, wait a few weeks in order to collect sufficient data and then activate the redirects.
Dirk
-
RE: Not existing domains linking to my website (spam)
Hi,
I don't really agree with the answer of Verbinteractive: disavowing links in WMT only concerns Google - there is no way Moz OSE is able to know that these links have been disavowed.
If these sites no longer exist, then it's quite possible they will disappear from OSE, but not because of the disavow. If the links are from crappy sites, it's quite possible that Moz crawled them once but is not crawling them again (while the number of pages they crawl is huge, it's still limited).
Wait for the next crawl (Sept. 9th) & if these links are still present I would contact Moz directly.
rgds
Dirk
-
RE: New non-www. web address but the domain is the same
No need to set up a new analytics - the old one will work just fine.
If both the www & non-www would be active at the same time and if you would still use the previous version of analytics (i.e. not Universal analytics) - you would need to make a modification to your tracking code - but as far as I understand this won't be the case.
Dirk
-
RE: How best to roll out updated website to new responsive layout
Hi,
If you would have asked this question before April 21st I would have said to publish the responsive pages the moment they were ready, even if the remaining pages are having a different design.
At this point, it seems that the impact of Mobilegeddon is quite limited, so if you can finish the complete site in a couple of weeks, I would rather go for the better user experience and present the full site with the new design.
If in the next days the mobile update would have serious impact on search results, you can always decide to publish the pages which are already responsive.
Keep an eye on this post - https://moz.com/blog/day-after-mobilegeddon - if the update really starts having a big impact, you'll read it there (or follow Moz / Searchengineland / Searchenginewatch / ... on Twitter & you'll be the first to know).
rgds,
Dirk
-
RE: Implementing Schema.org on a web page
Hi,
I am not sure that adding schema.org on a result page adds a lot of value. If you send 50 different blocks of structured data, how should search engines understand which piece is relevant to be shown in the SERP's? I just did a check on 2 different sites (allrecipes.com & monster.com) - they only seem to use the schema markup on the detail pages, not on the result pages.
If you would like to go ahead, you could always measure the impact on the page by creating two (static) versions of a search result page - one with & one without markup - and testing both versions with webpagetest.org & the Google PageSpeed analyser. An alternative would be to use "lazy loading" - you first load the first x results (the visible part on screen); when the user scrolls, you load the next batch, and so on. This way, the impact on loading times would remain minimal.
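For completeness, this is roughly what a single product's markup block looks like on a detail page - all values are invented, and I'm using the JSON-LD flavour here (microdata would work as well):

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Product",
  "name": "Example product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.4",
    "reviewCount": "89"
  }
}
</script>
```

Multiplying this by 50 results on a page adds some bytes, but as noted above, compressed text is cheap compared to images and scripts.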
In each case, I would not try to show different pages to users & bots.
rgds,
Dirk
-
RE: Drop in traffic at start of December 2014
Hi Dave,
How is overall traffic - did you see the same traffic drop? Sometimes Google traffic is reported as referrer traffic (for one of the sites I manage, which is largely dependent on image search, almost all Google traffic is reported as referrer rather than search traffic).
I noticed that you're using a (very) old version of the GA tracker - consider updating to the latest version (Universal). If I check the page with the Tag Assistant plugin I get 3 warnings (sync version / code outside head / deprecated method get_tracker).
Dirk
-
RE: Not existing domains linking to my website (spam)
Agree, coffee does make a difference. 3pm here in Europe so I already had my daily dose
-
RE: Our website has 8 subdomains for each country, do i need to set up a campaign for each? And have to upgrade to have more than 5 campaigns?
When creating a campaign, by default it will track all subdomains. If you want to limit it to one subdomain only, select 'Exclude subdomains to restrict this campaign to the domain you enter' under the advanced tab. If I'm not mistaken you have this option only when creating a campaign - you can't change it afterwards.
If you want to track each subdomain with a separate campaign, you will have to upgrade to the next tier.
Dirk
-
RE: What's brewing on YouMoz? (And how you can Help)
Hi Ronell,
Nice ideas but when you do a post to promote Youmoz you might also consider speeding up the review process a bit. Not really motivating if you submit a post to get the message "We'll review your article as swiftly as we can, but because of the high volume of posts we receive, it could take several weeks for us to get to yours. ". This is not really in line with your answer to the question of Donna ("Turnaround has more to do with the quality of the post than anything else") - you can't judge the quality of a post before reading it.
Dirk
-
RE: Dynamic URL Parameters + Woocommerce create 404 errors
Probably the easiest way to deal with this is to choose one of these pages as the main page & use canonicals on the other versions pointing to the main one.
So - suppose you would choose /?show_products=all as the main page - then the page /?show_products=48 would have /?show_products=all as canonical.
As you also use pagination, the canonical option offers the advantage that you can put /?show_products=all as the canonical url on all the pages (check also this page: https://support.google.com/webmasters/answer/1663744?hl=en). Another option would be to put a nofollow on the other display options. So by default people would arrive on /?show_products=24 - the links to "show 48" / "show all" would be of type "nofollow".
You might want to check this article: http://googlewebmastercentral.blogspot.be/2014/02/faceted-navigation-best-and-5-of-worst.html - the scope of the article is a bit larger than the question you ask here, but some of the points are applicable.
Hope this helps,
Dirk
-
RE: Site Migration between CMS's
Hi Neal,
If I understand what you're saying - you're not really doing a migration - you will be running two CMS's in parallel; the current content will remain in Joomla & the new url will be created in Wordpress. People visiting the homepage will be redirected to site.com/wordpress_site/index.htm
If this is the case, this solution doesn't really look future proof. First of all there is the maintenance issue, you'll have to maintain two systems. Very soon, you will probably only update the Wordpress site & neglect the Joomla site and the two sites will become completely disconnected (and the Joomla content outdated). At one point, you will probably want to retire the Joomla application & then you will have to redirect the entire site to the root again.
I had a similar case when we did a partial migration (part of the site migrated to the new cms in a subfolder, part remained on the old cms on the root). The part which was migrated was one of the traffic generators of the site & we really had a very big drop in traffic, as this new part was almost disconnected from the rest of the site. (To be honest, there were also a lot of other issues with the new cms, which probably played a part in the traffic loss.)
I would recommend checking if you can't migrate the content from the old cms into the new one (at least the part which is generating traffic). If this is not possible, or difficult on a short timeframe, I would 301 the Joomla site to archive.site.com (or site.com/archive) & put the new site on the root. In a second step, you can then move the content from the old site into the new one and redirect from the archive to the new site.
Dirk
-
RE: Open site explorer: error message "There was an error getting your data"
Hi Mary,
Maybe a temporary glitch in OSE? I just tried the domain in OSE & I get results without problems.
rgds,
Dirk
-
RE: Brand Name Cratering - possible N-SEO or Black Hat Attacks
Hi,
Although you don't literally mention the name of the site in your question - it can be found based on the info in the PR message.
If I'm right, the site is entirely made in Flash (I never knew this still existed). While technically Flash can be indexed by Google, it's certainly not the best technology from an SEO perspective (read this article: http://moz.com/blog/flash-and-seo-compelling-reasons-why-search-engines-flash-still-dont-mix). It isn't the best technology from a user perspective either (the home page takes ages to load, and the final result doesn't really look very attractive - check http://www.webpagetest.org/video/compare.php?tests=150131_SB_TB8-r:1-c:0). If you really want to stick to Flash - there is some info to be found here: http://blog.woorank.com/2013/01/how-to-optimize-flash-for-seo/ - although this post also clearly points out that it's to be avoided.
The biggest problem with your site is that you are serving different content to bots (javascript disabled) and to human visitors (the non-javascript version of your site is completely different than the normal flash version). This is probably done with the best intentions in order to make the flash content accessible for bots, but I fear Google is considering this as cloaking. If you look at how Google sees your site - there is certainly room for improvement: http://goo.gl/Pvj51j . Main content is not really visible, not really easy to read - basically ignoring all the best practices which you can find on Moz.
I don't know why Google suddenly decided to drop your site from the listings, but I doubt that it is the result of any negative SEO effort. To put it very bluntly, it seems rather the result of sticking to a technology which is long past its expiry date.
Dirk
-
RE: What's brewing on YouMoz? (And how you can Help)
Hi Ronell
Just to clarify - I wasn't arguing that there is a link between review time & the quality of the post. I am quite happy with the quality of most of the posts on Youmoz, so you are certainly doing a good job.
I am not a native English speaker. If I get a message that it can take weeks before an article is reviewed for me this is equal to "it can take weeks before we read your post". Hence my remark - you can't judge the quality of a post before reading it and so your reply to Donna sounded a bit odd to me.
I understand from your answer that you do a quick review of the new posts every day. Probably it would be better to state this in the initial message (we did a quick review and it's a gem/interesting but will need some rework/needs to be completely rewritten/completely rubbish) rather than stating a generic message that it will take weeks to review.
Reading between the lines - by getting the generic message I understand that my post is simply not good enough at this point and is hidden somewhere at the bottom of the pile
Dirk
-
RE: Dynamic URL Parameters + Woocommerce create 404 errors
Hi Joost,
Unfortunately we don't use Wordpress in our company - I know the basics of the Yoast plugin but I am not really an expert in how to configure it. Maybe somebody else on Moz can help you on how to configure it. My guess would be to fill in the full url - you could always try & see if this works.
There are quite a lot of users on the forum who use Yoast - so I guess they should be able to help you with the configuration.
rgds,
Dirk
-
RE: 301 redirect with Magento; still Page authority 0 after 6 weeks
Using a 301 is the standard way of proceeding if you need to change the structure of your site. If done properly, there is no impact on your rankings (always make sure to update your internal links - avoid 301's on internal links). As a rule of thumb, don't change your url's too often, because the redirects can get quite messy after a few years.
For the ranking question, I wouldn't know how long it takes, I mainly track traffic, not really page authority, so can't help you on that one.
Unrelated to your question, but could have an impact on your SEO: I was a bit surprised by your choice of Magento for a smaller shop. Magento is known for being quite hungry for resources, and normally requires a dedicated server to run on. I sometimes had difficulty accessing the pages.
When I did a speed check the final result was ok, but 'time to first byte' was quite long, which could indicate that your server is not really up to its task. Another strange finding was that a very large part of the downloaded content consists of javascript files (65% of the total weight of the page) and css files (20%) - you should try to regroup these files & minify them. Also try to tune your caching settings - a lot of static resources could be cached. Full details can be found here: http://www.webpagetest.org/result/150204_1H_1A91/ - you should also check the Page Speed Insights tool from Google: https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fhippemamashop.nl%2F&tab=desktop.
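As a hedged example - assuming the shop runs on Apache with mod_expires available, which I can't verify from the outside - the caching for static resources could be tuned with something like:

```apache
# Illustrative caching rules for static resources (adjust lifetimes to taste)
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/png "access plus 1 month"
  ExpiresByType image/jpeg "access plus 1 month"
  ExpiresByType text/css "access plus 1 week"
  ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```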
On your product pages, some of the descriptions are quite short, and maybe not 100% optimised for search (and for your target audience). Additional info is not always correct - 'Materiaal: Nee' ('Material: No', on the Lief sokjes page). Try to find a unique tone of voice for your site - you are trying to attract young fashionable mothers, so you probably should adapt your descriptions to that target audience.
rgds,
Dirk
-
RE: Only two pages crawled
Without the actual site it's difficult to say. Did you check your robots.txt? Do you block robots in your headers? Is your site entirely made in Flash or Ajax?
If you send the site in a PM I could have a look.
Dirk
-
RE: Top level domains showing wrong meta tag des in different country
Hi Justin,
The hreflang tags are now completely wrong on the .com version (and it seems the same on the .com.au & the .co.nz).
It's not really that technical, but check with your programmer - he must be able to find a way to get this right. The x-default is not that important in your case, I guess, as you mainly target these 3 markets. If you can't fix it, it's better to take the tags off, because now you're sending a very confusing message to Google (for en-us it now has the choice between 3 url's - and for the other languages there is no alternate url mentioned...).
rgds
Dirk
-
RE: Images & Duplicate Content Issues
To quote Google: Duplicate content generally refers to substantive blocks of content within or across domains that either completely match other content or are appreciably similar.
Having one image on two different posts - even when alt & title are identical will hardly qualify as duplicate content. A lot of sites (even well known publishers) are reusing the same images on their pages - if you're talking about stock photos - these are used on millions of sites. You won't be able to rank them for Image Search - but they won't hurt you either.
Dirk
-
RE: Screaming Frog - What are your "go to" tasks you use it for?
I am a big fan of Screaming Frog myself. Apart from the real basic stuff (checking H1's, titles, etc.) it's also useful for checking whether all your pages contain your analytics tag and for checking the size of the images on the site (things Moz can't do).
It's also extremely useful when you're changing the url structure to check if all the redirects are properly implemented.
Sometimes you get loops in your site, especially if you use relative rather than absolute links - Screaming Frog has an extremely helpful feature here: just click on the url and select "crawl path report", which generates an xls showing the page where the problem originates.
It's also very convenient that you can configure the spider to ignore robots.txt / nofollow / noindex when you are testing a site in a pre-production environment. Idem for the possibility to use regex to filter some of the url's while crawling (especially useful for big sites if they aren't using canonicals or noindex where they should).
rgds,
Dirk
-
RE: Inurl: search shows results without keyword in URL
Hi Theo,
We encountered something similar when we migrated a site. We properly redirected all the old url's to the new ones; however, in the weeks after the migration, we saw a huge increase of 404's in Webmaster Tools.
When we took a closer look at these url's, we noticed that they were using an url structure we had abandoned several years ago. On the "old" site, these were redirected, but we didn't implement these old redirections after the migration, as we assumed that these very old url's wouldn't be in the index anymore. We were proved wrong. We could delete them manually from the index using Webmaster Tools because they used folders we are no longer using; this is probably not possible in your case.
While it is a bit annoying, I don't think that having these "phantom" url's in the index is doing you any harm in terms of SEO. They will probably never pop up for normal search queries, only when you do in-depth queries limiting the results to your site.
rgds,
Dirk
-
RE: How to force moz to crawl my backlinks?
Hi
Well - given the troubles Moz has had building the regular index in the past months, I doubt they will add this feature anytime soon.
Don't forget that DA & Trust are internal metrics of Moz, which try to predict how well your site will rank in the SERP's. Like all tools, it is just an estimate of the real thing.
Dirk
-
RE: Top level domains showing wrong meta tag des in different country
Hi Justin,
I fear I made a mistake in the url's - you have to use the full url
Sorry.
Dirk
-
RE: Geotargeting in Webmaster Tools - Is there an expected ranking benefit in the geotargeted region ?
Hi Dan,
As far as I know, geo targeting in Webmaster Tools is only possible at the domain level (and only for generic tld's like .net, .com, etc.). I agree with Robert - in your case I would certainly not do it, as you try to attract 3 different countries with one TLD.
For one of the Spanish sites I manage we tried the geo targeting in GWT, but it didn't seem to give specific benefits. The only measurable effect we noticed was that the traffic from S-America was decreasing (slightly), but there was no increase in traffic from Spain. We didn't experience a significant increase in rankings in Spain either, so we removed the geo targeting (this site had quite generic content, so not really linked to a specific location in Spain).
We still have the geo targeting active on a shop specifically targeting the Spanish market, but we still get approx. 45% of our traffic from S-America (which from our perspective is useless traffic as we don't ship to these countries)
Dirk
-
RE: Dynamic URL Parameters + Woocommerce create 404 errors
Joost
You might want to open a new question (on how to filter) - this question is too deep down the list & marked as answered so I fear nobody is going to see your additional question.
rgds,
Dirk
-
RE: Creating a Landing Page with a Separate Domain to Control Bounce Rate
I fully agree with Egol - the moment you start "manipulating" traffic just to please Google, you're taking the wrong direction. It can maybe work for a few months, even years, but in the end it's always a bad strategy.
The only valid strategy is trying to figure out how to please your visitors - traffic & Google will follow. It's not always easy to cope with the pressure to change things because your competitors are doing it that way, or because you have certain targets, but you'll win in the long run.
-
RE: MOZ competitor analysis tool
None of the tools on the market is able to find all links to your site - and different tools will find different links. If you want to have the best view on who's linking to your site you will have to combine several tools (ahref/moz/semrush/search console/...).
Ahrefs used to be the tool that was able to find the most links (not sure if that's still the case), which is in line with what you're seeing. Moz does a lot of processing of the data it finds to calculate the page/domain authority, so they need to limit the index they work on (even if it's huge).
Dirk
-
RE: Top level domains showing wrong meta tag des in different country
Justin,
Seems to be ok now - tested the 3 url's and all are ok (homepage)
Don't forget to update the pages inside the site:
Example: https://www.zenory.com/tarot-readings
Here the hreflang url's should point to the corresponding /tarot-readings page on each domain - currently they all point to the homepage.
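A sketch of what the hreflang block on that page could look like - the en-us / en-au / en-nz values are my assumption based on the three domains:

```html
<!-- In the <head> of https://www.zenory.com/tarot-readings (and its .com.au / .co.nz counterparts) -->
<link rel="alternate" hreflang="en-us" href="https://www.zenory.com/tarot-readings" />
<link rel="alternate" hreflang="en-au" href="https://www.zenory.com.au/tarot-readings" />
<link rel="alternate" hreflang="en-nz" href="https://www.zenory.co.nz/tarot-readings" />
```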
rgds,
Dirk
-
RE: 2 Top level domains - not ranking?
Hi Justin,
You might want to look at your server configuration - all the domains seem to have some config issues (no SOA record / no DNSSEC processing). I am not really into this technical stuff, and cannot judge if these issues are really important or not.
http://dnscheck.pingdom.com/?domain=www.zenory.com&timestamp=1429174636&view=1
http://dnscheck.pingdom.com/?domain=www.zenory.co.nz&timestamp=1429174857&view=1
(I noticed when I tried to crawl the .com.au site with Screaming Frog and got a Connection Error)
Apart from that, as mentioned in a previous answer, the fact that prices are listed in NZ$ is probably a bit strange for .com (and .com.au) users. The language used is the NZ version - in the States spelling is slightly different (e.g. behaviour vs behavior).
Did you manage to create sitemaps for the US & AU versions?
Not related to the ranking issues, but the homepage seems to be very heavy - you might want to work on that (http://www.webpagetest.org/result/150416_2D_J3X/).
Hope this helps,
Dirk
-
RE: Pages giving both 200 and 302 reponce codes?
Hi,
The message you get from the seobook tool is quite straightforward - trying the url https://www.equipashop.ie/shop-fittings-retail-equipment/gridwall/gridwall-shelves/flat-gridwall-shelf.html generates a 302
Then the tool tries the redirect url (https://www.equipashop.ie/flat-gridwall-shelf.html) - which gives status 200.
When I visited the site myself - on first visit - I was redirected to https://www.equipashop.ie/flat-gridwall-shelf.html - when I tried a second visit - I remained on https://www.equipashop.ie/shop-fittings-retail-equipment/gridwall/gridwall-shelves/flat-gridwall-shelf.html
I tried again using Incognito browsing - and again I was redirected. I think it has something to do with your cookie settings - when I disable cookies I am always redirected.
Hope this solves your question. Apart from that, if you want to redirect the url, it's better to use a 301 than a 302.
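If the redirect is done via Apache, a hedged sketch of the difference (assuming an .htaccess / mod_alias setup, which I can't verify from the outside):

```apache
# 302 (temporary) - roughly what the site returns now:
# Redirect 302 /shop-fittings-retail-equipment/gridwall/gridwall-shelves/flat-gridwall-shelf.html https://www.equipashop.ie/flat-gridwall-shelf.html

# 301 (permanent) - preferred if the move is definitive:
Redirect 301 /shop-fittings-retail-equipment/gridwall/gridwall-shelves/flat-gridwall-shelf.html https://www.equipashop.ie/flat-gridwall-shelf.html
```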
rgds,
Dirk
-
RE: Keywords ranks 1 position up for 24 hours or less and gets back to its normal position.
The positions in the SERP's fluctuate all the time. In fact, there is no such thing as a universal ranking: Google takes a user's previous behaviour into account to determine the results in the SERP's (http://googleblog.blogspot.be/2009/12/personalized-search-for-everyone.html). If you try to check the rankings, you should try to do it in a more "anonymous" way - Cyrus Shepard from Moz wrote an interesting blogpost about it.
Apart from personalized search, Google is a learning system, and changing positions in the results are part of that learning process. Rather than continuously tracking the small ranking variations, it's better to track your 'average' ranking, which you can find in Google Webmaster Tools. As long as it's stable or improving, you're doing ok.
Dirk
-
RE: Change Brand Spelling after 8 years
To be honest - if it is only putting one letter in capitals most of your customers/visitors will hardly notice. If this is the only change - it will not make a difference in terms of usability.
Google changed its logo recently and there was a big buzz about it. When I asked at home, nobody had even noticed the change (4 frequent Google users).
Dirk
-
RE: 2 Top level domains - not ranking?
Hi Justin,
Next Tuesday is the 21/4 - the day the Mobile Friendly update is launched. You can find more info about it here: http://moz.com/community/q/google-s-mobile-update-what-we-know-so-far-updated-3-25
rgds,
Dirk
-
RE: Thin Content pages
@Stramark - for me, putting a follow link to the customer(s) depends on the context of the site. As I don't know this particular site, I just mentioned to be careful. It's up to the OP to decide if these links should be follow or nofollow. If these links look natural (see https://moz.com/ugc/what-is-an-unnatural-link-an-in-depth-look-at-the-google-quality-guidelines) they are probably ok - if they are "too" optimised for certain keywords, they are not.
-
RE: Strange URL redirecting to my new site
You could check with your hosting company. When I do a reverse IP lookup - it seems the other site is hosted on the same ip address as your site (http://reverseip.domaintools.com/search/?q=radiokilimanjaro.org.uk). Maybe something got mixed up with the configuration at the hosting side.
Dirk