Best posts made by FedeEinhorn
-
RE: Benefits of human-readable sitemap (e.g. domain.com/sitemap)?
I think you can find the answer to your question in this other thread: http://moz.com/community/q/what-are-benefits-to-develop-large-html-sitemap
-
RE: Url for Turkish, Russian, Chinese, Arabic, Vietnamese and Arabic websites
That's also possible.
Check this post: http://uxmag.com/articles/a-url-in-any-language
URLs in any language will work fine thanks to Internationalized Resource Identifiers (IRIs). When you copy and paste them they may look strange because of percent-encoding, but they will still work in a browser.
Edit: offering a translated URL helps SEO too. Imagine a user searching for your site in their language: they are far more likely to click the result whose URL is written in their language than the one with an English URL.
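For illustration, here is a minimal PHP sketch of that percent-encoding (the Arabic slug is a made-up example):
<?php
// rawurlencode() produces the percent-encoded form you see when copying
// an IRI out of the address bar; decoding restores the readable form.
$slug = 'عنوان-المقالة';                              // hypothetical Arabic slug
echo rawurlencode($slug) . PHP_EOL;                   // %D8%B9%D9%86%D9%88%D8%A7%D9%86-...
echo rawurldecode(rawurlencode($slug)) . PHP_EOL;     // عنوان-المقالة again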
-
RE: If parent domain is www, does it matter if subdomain on a different server is non-www?
Basically, www is itself a subdomain of example.com. You don't need to structure your subdomains like www.service.example.com, if that's what you were asking; that's just a poorly structured URL, IMHO.
There shouldn't be any difference for .edu or .gov domains.
-
RE: Mixed English and Arabic URLs
Not at all. If the URL language matches the content of the page, it should work even better than a full English URL.
However, I would suggest using domain.com/ar/blog/عنوان بلوق عربية طويلة حقا على شيء مثير جدا للاهتمام for the Arabic blog, and if there's an English or French version of that page, you can also implement hreflang on both and point them to domain.com/blog/English-blog-title-really-long-on-something-very-interesting
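As a minimal PHP sketch, the hreflang annotations could be emitted like this (the URLs are the hypothetical ones above; the same set of tags goes in the <head> of both pages):
<?php
// Emit one alternate link per language version, self-references included.
$alternates = array(
    'en' => 'http://domain.com/blog/English-blog-title-really-long-on-something-very-interesting',
    'ar' => 'http://domain.com/ar/blog/' . rawurlencode('عنوان بلوق عربية طويلة'),
);
foreach ($alternates as $lang => $url) {
    printf('<link rel="alternate" hreflang="%s" href="%s">' . "\n",
        htmlspecialchars($lang), htmlspecialchars($url));
}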
Hope that helps!
-
RE: 301 redirects within same domain
You may want to check this video:
-
RE: What is the best way to deal with https?
What you are doing is completely alright.
Make sure your canonical tags are set properly to point to the HTTPS versions of the pages.
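As a minimal sketch, assuming PHP (the host/path handling is illustrative and ignores edge cases like non-standard ports):
<?php
// Build a canonical URL that always uses the HTTPS scheme and drops
// any query string, then print the tag into the page <head>.
$canonical = 'https://' . $_SERVER['HTTP_HOST'] . strtok($_SERVER['REQUEST_URI'], '?');
printf('<link rel="canonical" href="%s">' . "\n", htmlspecialchars($canonical));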
Hope that helps!
-
RE: Adding Extra Domains
If you just do a 301 of the entire domain then you are completely safe.
However, if you decide to go another route and build a landing page specifically designed to target those keywords, the page may be penalized as a doorway page, and it could take a hit from the EMD (exact-match domain) update too.
For example, take a big brand: Apple.
www.itunes.com -> http://www.apple.com/itunes/
www.iphone.com -> http://www.apple.com/iphone/
All redirect to their main domain: apple.com
*It must be a 301 redirect of the entire domain. Which brings me to the following point: keyword-rich domains aren't worth it; branded domains are. You don't see www.cellphone.com redirecting to Apple or any other cell phone brand's page; instead, brands register branded domains, in this example iphone.com, itunes.com, mac.com, ipad.com, etc. People aren't going to look for a keyword-rich domain; if they type "green apple" into the address bar, the browser will run a search for those keywords on Google, Bing, or whatever search engine they use, which will NOT return your keyword-rich domain, as it isn't even indexed by Google, being just a redirect.
To sum up: branded domains = perfect; keyword-rich domains = wasted money.
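If the extra domains point at the same Apache server, a minimal .htaccess sketch of the whole-domain 301 could look like this (the domain names are placeholders):
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?extradomain\.com$ [NC]
RewriteRule ^(.*)$ http://www.maindomain.com/$1 [R=301,L]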
-
RE: IP Address Geolocation SEO - Multiple A records, implications?
In terms of SEO, the server location itself does not carry any value. However, if your site loads really fast for visitors from France, it just makes sense to rank it higher for French visitors (which again confirms that it's the speed, not the server geolocation, that gives you the benefit).
As for your question, what you are trying to achieve isn't an easy task; there are entire companies dedicated to just that, DNS based on geolocation (Anycast DNS, geo-aware DNS), which is what almost all CDNs do: serve the content of a website from the server closest to the visitor. My recommendation? Get the fastest server you can in the area you mostly want to target (Europe? Then get a server that is fast in Europe) and use a CDN service like MaxCDN or CloudFlare to serve static, cacheable content from different locations and speed up loading as much as possible.
That being said, Google offers tools in Google Webmaster Tools to help you target specific countries and languages regardless of server location.
Hope that helps!
-
RE: I have a question regarding parking good value domain.
100% agree.
However, I think the domain may already be parked. Did you park it?
If you did, it will start losing its rankings (if it hasn't already).
-
RE: I need an SEO Specialist to take a look at a few things for me
Hi,
You can check the list of recommended companies Moz created here: http://moz.com/article/recommended
Hope that helps!
-
RE: Keyword Duplication in the title
My 2 cents:
Nothing to worry about. You won't repeat the word on EVERY page of your site; you are only doing it on the homepage. Plus, using the brand name on all the other pages will give Google (and others) a clue about the brand name:
Index: NLP Training and Certification Center | NLP and Coaching Institute
2nd Page: 2nd page | NLP and Coaching Institute
etc.
Hope that helps.
-
RE: Are flip books - pdf readers on websites SEO friendly?
Google indexes PDF content and files almost like regular HTML: links are followed, you can block indexing, etc. The only thing Google can't index from PDFs is images, unless you also have them in HTML format elsewhere.
I would definitely recommend converting those PDF menus to regular HTML.
You can find more info here:
What file types can Google index?
Can google fully index pdf files?
Hope that helps!
-
RE: Are Dated News Considered Low Quality Content?
I would go with another solution, though. I would start by analyzing the pages where you have the highest CTR, and among those, the ones with the highest bounce rate. Are those visitors leaving because the content is useless, or because they finished reading the piece and there's nowhere to go? Consider all the variables and try improving the content pages with links to related posts ("You may find these articles interesting", "Related Articles", etc.). Work on the site navigation too: if you are getting clicks from search, do whatever you can to keep those visitors there by improving user experience and navigation.
Hope that helps!
-
RE: Moz showing 404 error on one of my sites
Are you behind some kind of reverse proxy that could be blocking Rogerbot's IP addresses? I tested using several user agents, even Googlebot, and all returned a 200.
Either you are blocking the IP addresses of Rogerbot (Moz's bot) or it's an issue on Moz's end; in the latter case, you can contact them here: help@moz.com
Hope that helps!
-
RE: The wrath of Google's Hummingbird, a big problem, but no quick solution?
Everything that Alex said, PLUS file a reconsideration request. Wait a couple of weeks for Google's response. Your request MUST explain all the steps you took to clean up your site and bring it into compliance with Google's TOS (with proof and all)!
-
RE: Should I consolidate pages to prevent "thin content"
There are a couple other scripts/enhancements you can do to speed up the site:
- CDN - Loading images using a CDN (Cloudflare offers that for free).
- Image optimization
- Lazy loading the images (Also available for free using Cloudflare)
- etc.
-
RE: Do 404 Pages from Broken Links Still Pass Link Equity?
Equity is passed to a 404 page, which does not exist, therefore that equity is lost.
-
RE: Detail page popup questions for real estate client
Your developer is correct (partially) :).
You see, that DOES affect SEO, but in a good way. Over time, Google has learned to recognize what's best for the user, and from my personal point of view, having a "lightbox" (that's what that "window" is called) is far better than opening a new page if it presents the property details better. Whatever looks better to the user will also look better to Google.
Google is also capable of running and understanding JavaScript, so you shouldn't have any problem, even less so when the link actually points to a real page, meaning search engines unable to run JavaScript can still crawl the site perfectly.
Hope that helps!
-
RE: Why are low-quality exact match domains still ranking well for our biggest term?
Report the results to Google. Not that they will necessarily care, but if you open a thread in the Webmaster Help Forums, someone from Google may see it and tell you what's going on.
Anyway, that's completely normal, and I see it in every possible niche: bad sites ranked above good ones. They are just "awaiting" a drop; it will come, the question is when...
Try to get in touch with someone at Google: tweet Matt Cutts, open a thread in the WHF, write about it, get the word out. If you really deserve a better spot, all of that may get Google's attention.
-
RE: How to use canonical with mobile site to main site
Hey Mike,
So basically, if the page is unique and there's no other copy under another URL, you shouldn't use a canonical tag on that page pointing to itself?
I know it's like saying "the original copy of this page is here" while "here" is the same page, but it solves a lot of duplicate content issues that can arise when using URL rewriting.
-
RE: Google Manual Action (manual-Penalty)- Unnatural inbound links
Claudio,
Alright then you have it right (the www/non-www thing).
First, go over all your shady links and try to have them removed or nofollowed. There are online tools that can dig up contact forms, email addresses, etc. for those links, like Link Detox from LinkResearchTools (I think it is).
Run a full report and include all the links downloadable from Webmaster Tools plus those from Open Site Explorer; that way you analyze every possible link you have. Then filter out all the shady ones and send an email (a template, of course) to each webmaster (if there's no email address, try searching for a contact form). Point them to exactly where the link sits on their site; make their job easy so they actually do it.
Once all have been contacted, wait a couple of weeks for the results, run the report again, and create a disavow file with all the links that were not removed.
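For reference, a disavow file is plain text, one entry per line, with # for comments; a minimal sketch with placeholder domains:
# Asked for removal twice, no response.
domain:spammy-directory-example.com
http://another-example.com/paid-links-page.html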
Wait a couple of weeks.
Then get on the reconsideration request (the same one covers both www/non-www); again, send them proof of your work: share the spreadsheet you kept while removing links, the emails, some responses, some removed links, etc.
It could take a while to get your rankings back if the reconsideration is approved, but unfortunately I've read about cases where the rankings never returned.
-
RE: Handling Multiple Restaurants Under One Domain
Are the restaurant names/brands different? If so, I'd suggest you stick with one website for each of them, and if you'd like, you can link them together with a nofollow link.
From an SEO perspective, you can build more authority by using a single domain for both, but from the user's point of view two sites seem much more appropriate, and Google says it every day: think about, and build for, users.
It is ultimately your choice; those are just my 2 cents.
-
RE: Bot or Virus Creating Bad Links?
It seems like your client's doing. Even though you say they swear they did not create the links, nobody running negative SEO would waste time or effort pointing links at a site that was just launched and isn't even ranking well yet.
Perhaps your client bought the links without even knowing it; there are places where links are sold as "Be on the first page of Google", so ordinary people don't associate that with spam links...
-
RE: Trying to SEO a site that used Header Tags for Design
Hi Chris,
The main issue with h1 and h2 tags is that people use them everywhere within the design, while there's a reason for the 1, 2, 3, etc.: h1 denotes your primary heading, h2 a subheading of the h1, and so on.
You could change the other, less important h1 and h2 tags to h3 and h4 accordingly, styling the h3 and h4 in the CSS to preserve the design.
The only truly bad thing to do there would be to hide (display:none) any of the headings.
Hope that helps!
-
RE: Author rank
Google suggests linking to the REAL author, in this case you. Articles are written by people, so if you are writing an article, research findings, etc., I would go with linking rel="author" to your personal profile.
If the article is related to your business and you do not want any personal attribution, then set the author to your business page instead.
Hope that helps!
-
RE: Investigating a huge spike in indexed pages
Have you contacted the Google Webmaster Help forums? That looks like a glitch on Google's side.
How many pages does Mozbot crawl? If the number Mozbot shows is different, then you should either sit and wait until Google removes those indexed pages, or start a thread in the forums so someone at Google can give you a hint about what is going on.
-
RE: Articles URL
If you don't want an ID in the page URL (which is just the article title, "what-is-the-visa-on-arrival-how-to-get-it.html"), strip the .html and add an extra field to the DB holding the URL-friendly name; for this article that field would contain "what-is-the-visa-on-arrival-how-to-get-it". You then search your DB for that field to get the entry ID (adding an index on that new column, of course).
That's a good solution for small DBs, but at scale you may want to use an integer to get the post ID instead, e.g. "http://www.vietnamvisacorp.com/news/245/what-is-the-visa-on-arrival-how-to-get-it.html"
You get the ID plus the title, and you avoid any issue that may arise if you accidentally post two articles with the same title (you can also prevent that by checking the DB before saving the article; if the title already exists, either change it or append something to differentiate them, say "-2").
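As a minimal PHP sketch of the ID-based scheme (the table and column names are made up; adapt them to your schema):
<?php
// Extract the integer ID from URLs like /news/245/some-title.html,
// then look the article up by ID, which stays fast at any scale.
$path = strtok($_SERVER['REQUEST_URI'], '?');
if (preg_match('#^/news/(\d+)/[\w-]+\.html$#', $path, $m)) {
    $pdo  = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');
    $stmt = $pdo->prepare('SELECT * FROM articles WHERE id = ?');
    $stmt->execute(array((int) $m[1]));
    $article = $stmt->fetch(PDO::FETCH_ASSOC);
} else {
    http_response_code(404);
}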
Hope that helps!
-
RE: Google Not Indexing XML Sitemap Images
Within that robots.txt file on the CDN (which one are you using?), have you allowed Google to crawl the images?
Most CDNs I know let you block engines via robots.txt to save bandwidth.
In case you are using NetDNA (MaxCDN) or the like, make sure your robots.txt file isn't disallowing crawlers.
We use a CDN too, to deliver images and static files, and all of them are being indexed. We tested disallowing crawlers, but it caused a lot of warnings, so instead we now allow all of them to read and index content (a small price to pay to have your content indexed).
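For reference, the fully permissive robots.txt on the CDN host is as minimal as it gets:
User-agent: *
Disallow: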
Hope that helps!
-
RE: Salvaging links from WMT “Crawl Errors” list?
Exactly.
Let's do some cleanup.
To redirect everything from domain.com/** to www.domain.com you need this:
RewriteCond %{HTTP_HOST} !=www.domain.com [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
That's it for the non-www to www redirection.
Then you only need one line per 301 redirect, without repeating the RewriteConds you had previously, like this:
RewriteRule ^pagename1\.html(.+)$ /pagename1.html [R=301,L]
That will redirect any www/non-www URL like pagename1.htmlhgjdfh to www.domain.com/pagename1.html. The (.+) acts as a wildcard for the trailing garbage (note .+ rather than .*, so the clean URL itself doesn't match the pattern and cause a redirect loop; the dot in .html is escaped too).
You also don't need to type the full domain as you did in your examples. Since the target is on the same domain, just give the path: /pagename1.html
-
RE: Issues with centrally hosting your own affiliate links?
All the methods you mention are valid as long as you nofollow them.
And if having one domain to redirect simplifies your work, then go for it.
Remember to always nofollow those links.
-
RE: Avoid Keyword Self-Cannibalization. Please Help
You are welcome!
We posted an article about a week ago that includes a list of tools you can use to do some keyword research, almost all free: http://www.fulltraffic.net/blog/85067/5-amazingly-effective-ppc-keyword-research-tools/
WordStream's Keyword Niche Finder is a really good choice, while Ubersuggest will also give you lots of ideas; both offer the option to download the findings. Try researching some long-tail keywords too: "fake diploma" is basically the same as "not real diploma", "diploma without classes", and so on.
-
RE: Google Not Indexing XML Sitemap Images
Hmmm, I'll step off here: I've never used cloudinary.com or even heard of them. I personally use NetDNA with pull zones (meaning they load the image/CSS/JS from your origin and store a copy on their servers) while handling cropping/resizing on my own end (via PHP, then serving that image; example: http://cdn.fulltraffic.net/blog/thumb/58x58/youtube-video-xQmQeKU25zg.jpg; try changing the 58x58 to another size and my server will handle the crop/resize while NetDNA serves it and stores it for future loads).
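As a rough PHP sketch of that origin-side crop/resize (the size whitelist, paths, and query parameters are all made up; real code should also cache the result to disk and send cache headers):
<?php
// Serve a resized JPEG thumbnail from a whitelisted set of sizes.
$sizes = array('58x58' => array(58, 58), '120x120' => array(120, 120));
$size  = isset($_GET['size']) ? $_GET['size'] : '58x58';
$file  = basename(isset($_GET['img']) ? $_GET['img'] : ''); // strip any path tricks
if (!isset($sizes[$size]) || !is_file("images/$file")) {
    http_response_code(404);
    exit;
}
list($w, $h) = $sizes[$size];
$src   = imagecreatefromjpeg("images/$file");
$thumb = imagecreatetruecolor($w, $h);
imagecopyresampled($thumb, $src, 0, 0, 0, 0, $w, $h, imagesx($src), imagesy($src));
header('Content-Type: image/jpeg');
imagejpeg($thumb, null, 85);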
-
RE: Internal followed links 1
I don't think it's a glitch; you just started the campaign, and Moz had never crawled your site before, so it simply has no information yet. Give it a day: I believe the first crawl runs right after you start the campaign. OSE info, however, can take a while to update. If the site is very new, check your crawl diagnostics instead.
-
RE: The page is missing meta language information.
Then you shouldn't add any meta tag. That tag is used to specify that your site targets a specific language and/or country; as yours doesn't, there's no need to use it.
-
RE: How can i stop such links being indexed
If I got it right, what you need to do is add a canonical tag to the definitive version of the URL (the one you want indexed): https://support.google.com/webmasters/answer/139394?hl=en
Plus a meta noindex to those that you don't want to have indexed: http://googlewebmastercentral.blogspot.com/2007/03/using-robots-meta-tag.html
Hope that helps!
-
RE: Moz reporting appropriate Canonical tag usage but no canonical tag on page !?
Although canonical tags are not really intended to point to themselves (that is, the page http://www.domain.com/page shouldn't need a canonical tag pointing to "http://www.domain.com/page"), I have found self-referencing canonicals very useful, especially now that traffic sources append their own query strings (e.g. http://www.domain.com/page?utm_source=twitter), because they minimize the chances of duplicate content issues.
-
RE: Suggestions on Website Recovery
It is widely known that manual penalties do expire. If you have fixed the issues and cleaned up the backlink profile, disavowing the links that were impossible to remove, then perhaps the penalty simply expired, and since you are no longer in violation of Google's quality guidelines, you haven't received it again.
If, in another scenario, the penalty expired and you didn't do the cleanup, then most likely it will come back to bite you in the a***.
I've always heard that penalties expire, but no one could say how long it takes; I think you are the first who can verify that penalties expire and don't come back as long as the underlying issue is fixed.
Anyway, after a penalty is revoked or expires, it will take some time, probably months, to see the changes.
From my point of view, you are headed in the right direction: building content to earn backlinks is the way to go.
Hope that helps!
-
RE: Avoid Keyword Self-Cannibalization. Please Help
Writing content with the keywords you want to rank for is great; just don't overuse them. There's a thin line between mentioning a keyword and stuffing an article with it. There's been a lot of chatter about what percentage of the text your keyword should account for; however, I suggest you keep the user as the target. Don't build content for search engines, build it for users!
-
RE: How does Google index pagination variables in Ajax snapshots? We're seeing random huge variables.
I think you are right: Google is fishing for content. I would make those URLs friendly by removing the hash and using URL rewriting plus pushState to paginate the content instead.
Here's a previous question that may help: http://moz.com/community/q/best-way-to-break-down-paginated-content
-
RE: On the on page optimization page, I found out that there are 2 contributing factors which are opposite to each other. "No More Than One H1 Tag" and "Appropriate Keyword Usage in H1 Tag"
Exactly what Moz is telling you. There are 2 H1 tags in that page.
The first one surrounds the logo. It is COMPLETELY USELESS and, by the way, keyword-stuffed, since that text is never shown to the user; it gets replaced by the logo. Even for browsers without images (almost non-existent these days), it should contain only one or two words, exactly like the logo.
The second H1 tag in the page has: "Why Wrapped Car and Vehicles advertising?" in it. That's the one you should keep.
However, looking at your page, it seems the following text is more important: "Wrapped Cars and Vehicles", which currently uses an H2. I would consider making that the H1 and focusing the keywords you need on that phrase instead.
Remember the CSS changes you'll need to make too.
PS: Remove that h1 tag from the logo.
-
RE: Noticed a lot of duplicate content errors...
For the tags, it's easy: noindex them; those pages don't offer any value to search engines.
For the categories you could go the same way, or take another approach and use only one category per post.
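Assuming this is WordPress (the tag/category setup suggests it), a minimal hand-rolled sketch for the theme's header; most SEO plugins expose this as a checkbox instead:
<?php
// Noindex tag archives while still letting crawlers follow their links.
// is_tag() is WordPress's own conditional tag.
if (function_exists('is_tag') && is_tag()) {
    echo '<meta name="robots" content="noindex,follow">' . "\n";
}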
-
RE: Infinite Scrolling in On-Page Search Results
Philip,
There was a question similar to yours a few weeks ago; I suggest you check it out, as it may answer your question as well:
http://moz.com/community/q/best-way-to-break-down-paginated-content
-
RE: On the on page optimization page, I found out that there are 2 contributing factors which are opposite to each other. "No More Than One H1 Tag" and "Appropriate Keyword Usage in H1 Tag"
No problem! You are welcome!
FYI, the page you gave me has almost no content apart from a few lines and 3 pictures, plus the header with a little animation and the footer. Yet it loads over 10 scripts and over 5 CSS files. Heck, the first 135 lines of the source are just the header, while the actual content of the page runs from line 258 to line 274 (16 lines). I would review all those WordPress files, remove as many as possible from the site, and try to merge the rest. You are making over 30 requests for a page that has only 16 lines of actual content.
-
RE: Best way to noindex long dynamic urls?
I wouldn't put a noindex meta tag on them; instead, I would consider using a canonical tag pointing to the page that lists all the villas.
Anyway, what programming language are you using?
-
RE: Google Showing H1 Title Instead of Doc Title in Search Results?
Hey Stephane,
The reason: Google thinks your H1 content is more likely to interest the searcher than your title.
Control: none. However, you should run some tests to figure out why Google finds your H1 more attractive than your title, and rewrite the title to better target your audience.
Hope that helps.