Posts made by FedeEinhorn
-
RE: 612 : Page banned by error response for robots.txt
Hi, did the solution work?
-
RE: 612 : Page banned by error response for robots.txt
Seems like an issue with the Moz crawler, as the robots.txt has no issues and the site loads just fine.
If you've already tested your robots.txt with the "robots.txt Tester" in Google Webmaster Tools just to be sure, then you should contact Moz here: https://moz.com/help/contact/pro
Hope it helps.
-
RE: Geo-target .ag domain?
Any domain can be targeted to any country you want. Take for example the popular .io TLD, which is assigned to the British Indian Ocean Territory, yet it has been adopted by tons of technology apps. The same happens with many other TLDs.
-
RE: Difference Between Android Browser & Android Webview
Android Browser is similar to Chrome, and it's the default browser on Android phones.
WebView is an in-app browser. It's not the browser installed by default; it only runs inside apps that load web content.
-
RE: What to do with old conversion pages
If you delete the page and it had links pointing to it, the server will return a 404 Not Found page, and you lose any authority those links could pass to the main domain or subdomain.
With 301 redirects you at least carry a portion of that authority back to your website. If your CMS is somewhat advanced, it should be easy to hide those "expired" pages from the page list to avoid any confusion.
But again, if you redirect a page, let's say one about an inbound marketing conference in Boston, to the main domain, which doesn't serve any useful content to the user who was actually expecting the Boston conference page, that won't help at all. Instead, try to 301 it to something the user may be interested in even though the event they were looking for is no longer available; in this case, that could be a page listing all the upcoming inbound marketing conferences in or near Boston. By going that route you favor your site by letting the PageRank flow to the other page, and you also help the user, who is the primary target.
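For example, on an Apache server that kind of redirect can be a single .htaccess line (the paths here are hypothetical, adjust them to your URL structure):
Redirect 301 /events/inbound-marketing-boston-2014 /events/upcoming-inbound-marketing-conferences
That keeps the visitor on a relevant page and lets the old URL's authority flow into it.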
-
RE: Spam links - what would you do?
I got a little confused. Are those links you are trying to get rid of on your site? Or are they backlinks coming to your site from spam sites?
-
RE: What to do with old conversion pages
If the event has already passed and, as you say, there's no longer any value in those pages, you can just delete them. A good option would be to 301 them to a page that explains why the page is no longer available and offers other related pages the user might be interested in.
Hope that helps.
-
RE: Best Practices for Converting PDFs to HTML
No, you won't get penalized for redirecting the PDFs to HTML versions of them. In fact, Google will like it.
Here's a video that may help you out: https://www.youtube.com/watch?v=oDzq-94lcWQ
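For what it's worth, if the site runs on Apache and every PDF has an HTML twin at the same path, a mod_rewrite sketch like this would handle the 301s in bulk (that URL layout is an assumption, test before deploying):
RewriteEngine On
RewriteRule ^(.*)\.pdf$ /$1.html [R=301,L]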
-
RE: Will this fix my bounce rate?
As @jessconfitti said, you are 100% correct. If a user browses more than 1 page on your site, it will not count as a bounce, even if after the second page the user clicks back to whatever website they came from.
-
RE: Would you pursue this link?
Any kind of paid directory resulting in a paid link isn't helpful from an SEO perspective. Now, if the link has a nofollow attribute, it could still help from a user's perspective.
I would go for it only if it really helps you get customers from the directory.
-
RE: Penguin hit Website - Moving to new domain
Thanks so much for picking up my tweet and coming in here to drop a few lines, Marie!
I think I'll do a mix of both techniques. First, an extensive link cleanup. Then follow my first post's steps (adding the noindex meta tags on all pages as you suggested below) and then do the 301 redirect.
I'll also add a redesign to the mix, just so the site looks renewed for both users and search engines.
Will share insights once we complete the migration.
Thanks again!
PS: will keep an eye out for the article on SEW
-
RE: Moving to https - Webmaster Tools
When you make the switch and "pull the plug" on the old site to plug in the new one, they are basically different sites to Google, so you need a new GWT property for the HTTPS version (you also need the www and non-www versions, and it wouldn't hurt to have both for both HTTP and HTTPS).
What you are seeing now is Google reindexing your content. Totally normal. It takes a while for Google to remove all your HTTP pages and add the HTTPS pages to its index; there's nothing you need to worry about.
Do the other variant verifications and submit a new sitemap with the HTTPS pages to the version that is now being indexed.
I've done about 7 switches so far and all went well with exactly the same pattern: impressions start dropping on HTTP while increasing on HTTPS (it takes longer for HTTPS to start showing numbers than it takes for HTTP to lose them).
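In case it's useful, on Apache the site-wide HTTP-to-HTTPS 301 is typically just this standard mod_rewrite snippet (assuming both versions are served by the same host):
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]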
Hope that helps!
-
RE: Penguin hit Website - Moving to new domain
Thanks Matt. Do you mind if I PM you with a few questions, just to make sure I get the idea? I don't wanna end up burning the other domain.
Thanks!
-
RE: On page links
Which internal links are you referring to? Those in the article text, or those within the entire page?
-
RE: Guest Blogging Question? How many links in an article?
You are doing guest blogging for the links? Then just 1 is enough to get you penalized. Forget about guest blogging for links; instead, create great content and promote it, trying to earn a natural link.
If you are doing guest blogging, don't do it for the links. If in an article you write you happen to have an excellent piece on one of your properties worth linking to, then link to it. But do NOT write just so you can insert a link.
-
RE: Penguin hit Website - Moving to new domain
I'm sure this is a Penguin hit, 100%.
Very bad link profile, some cool links, but 90% junk. The traffic drop happened the day Penguin 3.0 was released, and the site is not being outranked, it is just not ranking for those money keywords. Content is still working great, with lots of hits to the blog, and the rankings of the best pieces are steady and haven't changed at all. Branding is still going strong too.
-
RE: Penguin hit Website - Moving to new domain
Thanks Matt,
So you say no robots disallow and no disavow file on the new domain, just a 301 after a FULL link cleanup?
-
RE: Penguin hit Website - Moving to new domain
Thank you for your response.
Although starting from scratch kind of hurts, I don't think it's a good option here. The site is very well known in its niche, with over 100K customers and 20K regulars, so we are trying to keep the same name (also trademarked, DBA, LLC, etc.).
Building a new image for the site and new content isn't the problem. However, I wonder how far Google may take their "penalty flow" if we start a new website using the same name, going from domain.net to domain.com (content and design will be different), with a 301/302 or JavaScript redirect placed on domain.net after all its pages are deindexed and there are no traces of the old site left in the index.
I must add this is an algorithmic penalty, not a manual one. There are no manual actions applied.
-
Penguin hit Website - Moving to new domain
Hey!
I am working on a Penguin-hit website. It's still ranking for all brand keywords, and blog articles are still being returned in Google SERPs, but the site is showing up for only 3 or 4 money keywords. It is clearly a Penguin hit, as it ranked on the first page for all money keywords before the latest update (3.0). We already did a link cleanup and disavowed all bad backlinks. Still, from previous experience the recovery process could take over 2 years, and in 2 years the site will suffer a slow death.
Solution: We own the .com version of the domain, which is currently served on the .net. We bought the .com about 6 years ago; it is clean and NOT redirected to the .net (the actual site).
We were thinking about moving the whole website to the .com version to start over. However, we need to make sure Google doesn't connect the 2 sites (no PageRank flow). Of course Google will notice it is the same content, but there won't be any PageRank flowing from the old site to the new one.
For this, we thought about the following steps:
- Block Googlebot (and only Googlebot) for the .net version via robots.txt (see the sketch below the list).
- Wait until Google removes all URLs from the index.
- Move content to the .com version.
- Set a 301 redirect from .net to .com (without EVER removing the block on Googlebot).
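For reference, the robots.txt for the first step would look like this, blocking Googlebot alone while leaving every other crawler untouched:
User-agent: Googlebot
Disallow: /

User-agent: *
Disallow: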
Thoughts? Has anyone gone through this before? Other ideas?
Thanks!
-
RE: Multiple Domain Names Pointing to One URL
You're welcome.
Then do what the big giants do: redirect each domain to the appropriate page on your main domain, e.g. product-domain.com 301 redirects to domain.com/product-page.
-
RE: Best practice SEO for images added via WP
It doesn't really matter that much, as long as you use the alt text and title well to describe the image. However, I always recommend (and do it myself) renaming the image file to match the title, in your case something like orange-campervan-cornwall-uk.jpg.
Always use dashes to separate words, not underscores.
-
RE: Multiple Domain Names Pointing to One URL
If the pages indexed by Google show up as extra-domain.com instead of the primary domain.com, while showing the description of the primary domain.com, then you are risking a doorway-page penalty, which will cause a lot of harm.
My best advice: 301 redirect all those domains to the corresponding pages of your primary domain (an Apache sketch follows the example).
Example:
- extra-domain.com 301 redirects to domain.com
- extra-domain.com/page1 301 redirects to domain.com/page1
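On Apache, a host-based rule can handle the whole page-by-page mapping in one shot (a sketch that assumes the paths match one-to-one on both domains):
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?extra-domain\.com$ [NC]
RewriteRule ^(.*)$ http://domain.com/$1 [R=301,L]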
Hope that helps!
PS: What was the idea behind purchasing 70+ domains? Are those product domains, like Apple does with, for example, itunes.com (which redirects to Apple's iTunes page)?
-
RE: Best practice SEO for images added via WP
Hi James,
Each field is actually intended for something different:
- Title: should say what the image is. In this case: orange campervan.
- Caption: is what shows up under the image, describing it to users visiting the page. In this case: orange campervan in Cornwall, UK, blah blah.
- Alt Text: should be an image description that appears when the browser can't (for some reason) display the image. In this case it can be "orange campervan", "Image of an orange campervan", or whatever you think best describes the image when somebody can't see it.
- Description: is just that, a short, brief description of the image contents.
Hope that helps.
-
RE: My data is stuck and hasn't updated from a few weeks back
Glad to hear!
Didn't expect that my step one could resolve the issue lol (just kidding)
-
RE: Google indexing nofollow links
Where are you seeing those links as indexed by Google? In Google WMT? OSE?
-
RE: My data is stuck and hasn't updated from a few weeks back
There are a few solutions you may try.
- Try asking for a crawl manually: https://moz.com/researchtools/crawl-test
- Remove the campaign and create a new one. If you have open slots, just create a new one instead of removing the current one, to avoid data loss.
- If you are still getting old data, then you should contact Moz's support at help@moz.com
Hope that helps!
-
RE: IP Address Geolocation SEO - Multiple A records, implications?
In terms of SEO, the server location does not carry any value. However, if your site loads really fast for visitors from France, it just makes sense to rank it higher for French visitors (the benefit comes from speed, not from the server's geolocation).
As for your question, what you are trying to achieve isn't an easy task, and there are entire companies dedicated to just that: DNS based on geolocation (Anycast DNS, geo-aware DNS), which is what almost all CDNs do, serving the content of a website from the server closest to the person connecting to it. My recommendation? Get the fastest server you can in the area you most want to target (Europe? Then get a server that is fast in Europe) and use a CDN service like MaxCDN or CloudFlare to serve static, cacheable content from different locations and speed up site loading as much as possible.
That being said, Google offers some tools under Google Webmaster Tools to help you target specific countries and languages, regardless of the server location.
Hope that helps!
-
RE: .co.uk and com: Independent sites, but owned buy us , sharing some product information
The way to solve this is actually to use both canonical and hreflang tags on all the pages that have duplicates. I remember answering a similar question that I am sure you will find useful here: http://moz.com/community/q/custom-hreflang-tags-in-wp-using-with-yoast
-
RE: Is Link Weight Lost Moving from HTTP to HTTPS?
According to what Google has said about 301 redirects and link dilution, you should lose some of the weight carried by those links.
However, since Google just revealed that HTTPS sites get a slight (minimal, in fact) boost in search results, HUNDREDS of sites are moving to SSL. Of all the pages that I've been tracking and reading, none showed any change at all in rankings from switching to HTTPS; anyway, it's too soon to draw conclusions, as the statement was released last week, so all those sites are just "too new" on HTTPS.
My gut tells me these 301s should act the same way as any other 301 (diluting some of the link weight), or it could be yet another way to cheat Google, but we'll see what happens.
I myself moved my site to HTTPS with 301s over the weekend and haven't seen any change, but I guess whatever I lose in the 301 redirects can be gained back by using SSL?
-
RE: Your search - site:domain.com - did not match any documents.
Hey,
It depends on the penalty, if any.
If you have no manual actions under the Webmaster Tools, that's a hint. However, it could be an algorithmic penalty.
If the penalty (again, if any) applies to the whole site, then changing the site's contents while making sure your entire site (backlinks too) complies with Google's quality guidelines should get the penalty revoked.
If the issue is actually just that Google can't access the site, then check why, fix it ASAP, and you should be ranking again in no time (use Fetch as Googlebot first to confirm whether that's the problem).
To sum up, you should run an extensive analysis of links, content, and server response errors to find the cause of the "penalty", then work on fixing it so you can start ranking again. Once you do, you can continue with the other SEO/design tasks.
As I said before, opening a thread in Google's Webmaster Help forums could be of much help.
All the best!
-
RE: How to add author avatar to my blog
That totally depends on the template you are using. If the template does not have that as an option, then you can code your own author snippet to show in each post.
If you are unfamiliar with WordPress coding, I would suggest you hire an experienced designer/coder to achieve what you are looking for. Freelancer.com is a good and cheap source of coders.
Hope that helps.
-
RE: Change Media Wiki urls to - instead of _
Hi Noah,
I suggest you stop by Stack Overflow (http://stackoverflow.com/), where development questions are better answered.
-
RE: Best Approach to Redirect One Domain to Another
Hmm... my best guess: if the content remains the same, meaning you have the same content on boo.com and foo.com, then simply do a page-by-page redirect; that will carry as much value as possible to the new pages.
However, if you do not have the same pages available on both domains, there are a couple of things you can do:
- Not the exact same content on the new domain: redirect to what you think is the best match.
- No similar content on the new domain:
- Option 1: Redirect to a page (sort of a landing page) showing similar pages that the user might be interested in.
- Option 2: Redirect to a landing page or homepage.
Hope that helps!
-
RE: Your search - site:domain.com - did not match any documents.
Holy... this IS weird.
I checked the robots.txt and there's nothing blocking indexing; the robots meta tags are present with INDEX.
You clearly need urgent access to Webmaster Tools. It looks like a penalty for pure spam or something like that, as not a single page is indexed while there are other sites linking to it.
What would I do? Before doing any further on-site SEO, get that resolved. Go to Webmaster Tools and check for any manual actions, messages, etc. Try Fetch as Googlebot. Then go to Google's Webmaster forums and ask; usually someone from Google jumps in.
-
RE: Your search - site:domain.com - did not match any documents.
Care to share the real domain?
-
RE: GWT Change of Address Keeps Failing
I guess Google needs to confirm that you own both sites when you do the address change. Since you already edited the DNS records, when you ask for a change of address the old website no longer exists and cannot be verified.
Can you keep both websites alive at the same time? I mean, leave the site up at rethinkisrael.org and also at www.fromthegrapevine.com, both at once (that shouldn't be too hard), and then try the Change of Address again. Once it completes, you can remove or 301 the old rethinkisrael.org.
Hope that helps.
-
RE: Huge Spike in Direct Traffic from IE7
AdRoll is the one to blame: http://www.seroundtable.com/adroll-invalid-traffic-18922.html
If it isn't messing with your server and is just a spike in Analytics, I would just leave it and enjoy the (probably fake) traffic.
As there's actually no referrer and nothing tying that traffic to a source, there's not much you can do aside from blocking some IPs.
You can also test-drive CloudFlare (very easy to set up, and free), which filters fake traffic (among other benefits) using known IP addresses and browser integrity checks before sending the hit to your server.
-
RE: Huge Spike in Direct Traffic from IE7
Have you signed up for any traffic exchange, affiliate program, or similar? Purchased a service on Fiverr or the like?
If those hits are not causing any damage to your server, then just ignore them... If they are causing damage, you might be the victim of a DDoS attack...
First, make sure you haven't ordered a service that results in that traffic; we'll go from there.
-
RE: Where would I start with optimizing my site
I would recommend you go through Moz's Beginner's Guide to SEO here: http://moz.com/beginners-guide-to-seo
-
RE: Custom hreflang tags in WP & using with Yoast
Dan,
If you have an English page that is also available in Turkish (same content but rewritten/translated), then an hreflang tag is recommended; not mandatory, but recommended. Although, as you said, you are already writing in Turkish and geotargeting in GWT, there are other engines too that, regardless of their market share, shouldn't be overlooked.
HOWEVER, if you have a page in English with no matching Turkish page, then you don't need the hreflang tag on that page. The tag is only used when the same content is available in another language/location, to tell engines which version they should serve.
What you mention about using x-default and removing the canonical makes no sense. Those are 2 different things, and one does not interfere with the other. The plugin I recommended does not mess with Yoast; it leaves the canonicals as they should be and adds the hreflang tags as specified. Check this example on my site, in English and Spanish, using both Yoast and the hreflang Manager plugin:
- English: http://viberagency.com/blog/6-reasons-shouldnt-put-intern-charge-companys-social-media/
- Spanish: http://es.viberagency.com/blog/6-razones-por-las-que-debes-dejar-un-pasante-cargo-de-los-medios-sociales-de-tu-empresa/
Check the source code: both have their canonicals and hreflang tags just fine. We chose to use the English version as the default, as you can see in the x-default.
The hreflang tags should be used only when the content is the same (but targeted at a different audience). Of course, when translating from one language to the other, some lines must be rewritten to make sense.
In my example I used two very similar (if not identical) languages; there are things that change, but they are minimal (take for example a car's "hood", which in England is a "bonnet"). As those changes are so minimal, I don't think a specific GB version is needed if you are already serving a US version (that's up to you). In that case (1 English version for all English speakers), you only specify the language instead of the language and region:
<link rel="alternate" href="http://www.example.com" hreflang="en" />
Now, an example where separate en-US and en-GB versions DO make sense: a page that explains what car repair centers are, plus a list of repair centers below it. In this scenario the content is the same, but the list of repair centers changes, and you would want to display the GB centers to your GB audience (still, from my point of view, of little use, but it's just an example).
Hope that clears it up
-
RE: Custom hreflang tags in WP & using with Yoast
Hey Dan,
If I understood correctly, you should use both. Canonical tags tell search engines which URL holds the canonical content, while hreflang indicates which version should be served to each visitor depending on the user's location/language.
If you use Yoast, it already handles the canonical tags and there's nothing you need to do. For hreflang, if at the moment you serve only 1 version to all visitors, then those tags shouldn't be used. However, if you have 2 quite similar versions, like en-US and en-GB, then you need to choose one as the default, let's say the US version, and have the following on each version (see the tag sketch after the lists):
en-US:
- Canonical pointing to it.
- Hreflang x-default pointing to it
- Hreflang en-US pointing to it
- Hreflang en-GB pointing to en-GB version
en-GB:
- Canonical pointing to it.
- Hreflang x-default pointing to en-US
- Hreflang en-US pointing to en-US
- Hreflang en-GB pointing to it
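Translated into actual head tags, the two lists above would look something like this (the example.com URLs are placeholders):
On en-US pages:
<link rel="canonical" href="http://www.example.com/en-us/page/" />
<link rel="alternate" hreflang="x-default" href="http://www.example.com/en-us/page/" />
<link rel="alternate" hreflang="en-US" href="http://www.example.com/en-us/page/" />
<link rel="alternate" hreflang="en-GB" href="http://www.example.com/en-gb/page/" />
On en-GB pages:
<link rel="canonical" href="http://www.example.com/en-gb/page/" />
<link rel="alternate" hreflang="x-default" href="http://www.example.com/en-us/page/" />
<link rel="alternate" hreflang="en-US" href="http://www.example.com/en-us/page/" />
<link rel="alternate" hreflang="en-GB" href="http://www.example.com/en-gb/page/" />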
This applies if the en-US and en-GB versions are NOT exactly the same. If the language changes (that's why you create a specific version for each country), you need a canonical on each version pointing to itself.
If the en-US and en-GB versions have the same contents, then the canonical should point to the en-US version (but then there's no real need for the en-GB version, which makes it useless/expendable).
As you mention that at the moment you don't have any extra languages/regions, you could leave the tags empty or, better, remove them.
There's a paid WordPress plugin that handles hreflang tags: hreflang Manager.
Hope that helps!
-
RE: Google Shopping Feed being blocked by robots.txt
Did you try manually fetching the robots.txt file as Googlebot via Webmaster Tools? If not, do that, then click "Submit to index". Then do the same for some of the images, or better, for a file linking to all images (like an image sitemap). Once that is done, let Google recrawl everything for a few days (even a week) and try again.
-
RE: Is there a way to get a list of Total Indexed pages from Google Webmaster Tools?
Joanne,
I'm afraid there's no way to know which pages are actually indexed from Webmaster Tools. You can use a simple Google search, site:domain.com, and it will list "all" your indexed pages; however, there's no way to export that as a report.
You can create a report using a little "hack". Log in to Google Drive, create a new spreadsheet, and use the following formula to populate the rows:
=importXml("https://www.google.com/search?q=site:www.yourdomainnamehere.com&num=100&start=1"; "//cite")
This will load the first 100 results. You will need to repeat the process for every 100 results you have, changing the last parameter from "start=1" to "start=100", then "start=200", etc. (you see where I'm going). This could really be a pain in the butt given your site's size.
My recommendation is that you navigate your own site, decide which pages should be removed, and then create the robots.txt regardless of what Google has indexed. Once you complete your robots.txt, it will take a few weeks (or even a month) for the blocked pages to be removed.
Hope that helps!
-
RE: Tumblr, Blog or Both
Does your client have enough content to regularly update both a Tumblr and a custom on-site blog? I would actually go with the on-site blog.
What's your client's reasoning behind the change?
-
RE: How to optimize drop down menus?
What do you mean by "optimize drop down menus"?
I am guessing you are referring to "how to let search engine crawlers recognize your menus"? Well, there's really no need to do anything.
Drop-down menus are built with CSS or JavaScript (or a combination of both). Search engine crawlers are already smart enough to recognize and run JS/CSS code, which means there's nothing you need to do (unless your menus are really messy and unusable to users?).
Hope that helps!