Hey Peter,
The Analytics code could have helped get the site indexed. Or even a Google +1/Facebook Like/share/StumbleUpon/etc. button clicked by mistake.
@Rajat
Doing the search Peter suggested should return any indexed page.
You have done the necessary steps (disallowing in robots.txt plus setting a noindex tag). There shouldn't be anything to worry about. If you want to be entirely sure, you can add HTTP authentication to the folder so only those who know the credentials can access it (you may find that some robots don't follow the disallow rule or the noindex tag).
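If you decide to go that route on Apache, a basic setup looks roughly like this (just a sketch; the .htpasswd path is a placeholder and you'd create that file with the htpasswd utility):
AuthType Basic
AuthName "Restricted area"
AuthUserFile /path/to/.htpasswd
Require valid-user
Put that in an .htaccess file inside the folder you want to protect.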
Not directly. However, you could implement an internal tracking method by looking at the referrer ("google.com", "bing.com", "yahoo.com", plus any other major source you would like to track), then set a session variable to track the visitor around the site, and finally, if he/she converts, fire an Analytics event based on that session variable.
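A rough sketch of that idea in PHP, assuming the classic Analytics tracker (ga.js); the session key and event names here are just illustrative:
<?php
// On every page, before any output: remember for the whole session if the visitor arrived from a search engine.
session_start();
$referrer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
foreach (array('google.', 'bing.', 'yahoo.') as $engine) {
    if (stripos($referrer, $engine) !== false) {
        $_SESSION['from_search'] = true;
        break;
    }
}
?>
Then on the conversion/thank-you page, fire the event only for those visitors:
<?php if (!empty($_SESSION['from_search'])): ?>
<script>
  // Classic Google Analytics event; adjust the category/action to your own naming.
  _gaq.push(['_trackEvent', 'Conversions', 'FromOrganicSearch']);
</script>
<?php endif; ?>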
Thomas, he already said that the HTTP site is being 301 redirected to its HTTPS version, so why add all that about how to do 301s on Apache and nginx?
What you are doing is completely alright.
Make sure your canonical tags are set properly to point to the HTTPS versions of the pages.
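For example, whether a page is requested over HTTP or HTTPS, its <head> should carry the same tag (example.com is just a placeholder):
<link rel="canonical" href="https://www.example.com/your-page/" />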
Hope that helps!
I would suggest you install the SEO by Yoast plugin and then follow all of its configuration steps. It will help you sort out your permalink options while explaining which one is the best choice.
Hope that helps!
Not at all. If the URL language matches the content of the page, then it should be even better than using a full English URL.
However, I would suggest using domain.com/ar/blog/عنوان بلوق عربية طويلة حقا على شيء مثير جدا للاهتمام for the Arabic blog, and if there's an English or French version of that page, you can also implement hreflang on them, pointing to domain.com/blog/English-blog-title-really-long-on-something-very-interesting
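For example, both versions would carry tags along these lines in their <head> (the paths are just shortened placeholders based on your example):
<link rel="alternate" hreflang="ar" href="http://domain.com/ar/blog/عنوان-المقال-بالعربية/" />
<link rel="alternate" hreflang="en" href="http://domain.com/blog/english-blog-title/" />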
Hope that helps!
Do the same classes in the future link to the same page? Are you using canonical tags correctly? Sharing your URL would help diagnose the problem and guide you better.
A couple of things come to mind after a quick look at your backlink profile:
Hope that helps!
Unfortunately, I experienced that issue in the past and had to create a new one instead. Write an email to moz help to see if they can reactivate it for you: help@moz.com
Have you tried doing it on the PRO version?
Does your sitemap include duplicate pages or pages you wouldn't want search engines to list (like search results pages, paginated duplicates, etc.)?
How do you know that you have 50K indexed pages if GWT reports 700 and a site: search reports over 50k?
Hey Christopher,
OSE only reports 4 incoming links; that's not even near what you'll need to rank for the terms you are probably targeting. The competition in the real estate business is fierce.
Anyway, there are a few things you can do now that may help:
Those are just a few enhancements I can suggest...
Hope that helps!
It can help. Remember, all search engines want is the same thing users want: more information. If the content you add is unique to your site and to the web, then it should help.
You don't see traffic from bit.ly because they redirect the user without serving any content; it's just a redirect, so you should see the actual referrer instead (Facebook or whatever it is). However, if you are getting the hit from a Facebook page, it is probably an HTTPS page, and when redirecting to a non-SSL page the referrer information isn't passed along. You could still see something like facebook.com/u.php?.... You should use Analytics tracking variables to better understand which posts those visitors are coming from: https://support.google.com/analytics/answer/1033867?hl=en
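For example, a link you post on Facebook could be tagged like this (the domain and parameter values are just placeholders; Analytics will then report them under campaigns):
http://www.yoursite.com/your-post/?utm_source=facebook&utm_medium=social&utm_campaign=page_posts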
As for your second question: just as you can interpret the redirect, so can Google or any other search engine that scrapes the page looking for signals. It won't matter if you pass the links through bit.ly or similar services; the link ends up on your site (as long as there's a 301 redirect in place, which is what bit.ly uses).
Hope that helps!
I suggest you take a look at the following Q&A explaining the implications of having affiliate links pointing directly to your site and how to deal with that: http://moz.com/community/q/blocking-affiliate-links-via-robots-txt
Hope that helps!
I suffered a similar issue. Although I do recognize I bought links back in 2007 - 2010, we were still getting about 10 new links every day, probably from some negative SEO campaign.
We got a penalty as a "partial match" affecting "some incoming links". We worked for over a year to clean up the backlink profile, and still every reconsideration request failed, as Google kept returning example links that were only discovered after we had downloaded all the links from every possible source.
What we did to solve this: we downloaded the linking domains from both GWT and OSE, and instead of going link by link and spending at least 2 months contacting webmasters AGAIN, we added domain:[domain-to-disavow] entries covering almost 90% of the profile, even domains that were linking to us honestly but could possibly be seen as spam (like a link from my own profile in a dev forum).
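For reference, the disavow file is just a plain text file with one entry per line, roughly like this (the domains here are made up):
# Domains we couldn't get removed after repeated outreach
domain:spammy-directory-example.com
domain:paid-links-example.net
# Individual URLs can be listed too
http://some-blog-example.org/page-with-a-bad-link/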
After sending the disavow file with about 1,500 domains, we didn't wait: we sent the reconsideration request right away, including all the previous reconsideration requests, their outcomes, and our decision to give it a "last shot". Two weeks later we received their response: penalty revoked.
We started seeing ranking increases from the next day.
Those suggesting not to add the SSL pages to the HTTP sitemap are going by data from back in 2007, when Google indeed showed an error on sitemaps listing both HTTP and HTTPS pages because they were recognized as different domains. Those days are long gone. Google has evolved and can now handle sitemaps with both HTTP and HTTPS pages just fine.
I would definitely go with www.example.com/cityname. It isn't just easier to maintain; it will also be much easier to rank, as any link juice received by www.example.com/cityname1 will also benefit www.example.com and www.example.com/cityname2, which in the end will make it even easier for the new cities you add later.
Having separate domains that include the city name and link to each other will probably be seen as EMDs, which will affect all the domains.
There's no "must" here, you can do it if you want. But as long as you are redirecting ALL non-www to the www version of it, you have nothing to worry about.
I redirect the non-www to the www, but I set that up anyway, just in case.
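If you're on Apache, the usual .htaccess rule for that redirect looks something like this (example.com is a placeholder; nginx would need its own equivalent):
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]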
The redirects will cause a small dilution of the pagerank on those pages, just the same amount as having a single link in the trips/country to destinations/country.
But forget about the pagerank: do you really expect to get more search traffic depending on whether the URL says trips or destinations? It all comes down to what you want to achieve. The URL you have now doesn't look bad at all, and you could probably keep it by tweaking the code a bit instead of redirecting all pages to the destination/country page. So again, weigh the pros/cons:
Pros:
Cons:
I only suggest implementing redirects when they really offer a better option, but in this case you would end up with the same result, less pagerank, and longer URLs...
Just my 2 cents
Most likely it is a Moz error. As far as I know, Roger (the Moz crawler) automatically adjusts its crawling speed depending on the site's performance, I guess by watching the response times; if it crawled sites too fast, it could either bring down your server or get its IPs blocked as a security measure. It does that automatically. However, sometimes our websites become slow, and instead of a 200 status code (successful load) the crawler gets a 404 because the server isn't responding.
Are you on a shared server? Were you able to see Roger crawling your site? There are some free services that let you run stress tests to see if your server responds well, even under pressure.
This shouldn't have anything to do with the pages not showing in Google, but if you are having server issues with regular downtime, Google may remove pages that it considers gone from your site. Google has already said that a couple of 404 errors isn't enough to remove a page from their index; however, a 404 lasting 3 - 5 days might be.
Have you verified your site in GWT? It shows crawl errors, and if Googlebot is also getting 404s, they should appear there.
Hope that helps!
Just published an article on SEJ about Schema.org markup and its role in future Google updates: http://www.searchenginejournal.com/schema-org-eight-tips-incorporate-rich-snippets-conform-google-hummingbird/75435/
As Peter pointed out, it will depend on the CMS you are using; nowadays there are plugins for almost everything.
Didn't know about Market Wired.
PRWeb (Vocus) customer support SUCKS! So if they are similar services, definitely go with Market Wired...
G,
It wouldn't make any difference to serve the sitemap over HTTP or HTTPS. As for having HTTP and HTTPS pages within the same sitemap, that isn't a problem either.
The only reason I can see for creating separate sitemaps is for HTML pages, images, or videos, which do require their own sitemaps.
Does your site use PHP? If yes, I suggest you test xml-sitemaps.com; it will create the full sitemap for you. If you have a dynamic site, then I suggest getting their commercial version. I've been using it for over 7 years, I think, and I always get a copy for each site I create. They also offer lots of extras in case you need them (news sitemaps, etc.).
Hey G!
You can serve your sitemap in both versions, that won't be any problem and won't trigger the duplicate content issue. So you are safe both ways.
As for the second question: yes, you should, unless you don't want your pages indexed (whether HTTP or HTTPS). I think I saw your site before, and if I remember correctly you had your homepage and login script under SSL, right? Then you should definitely include your homepage in the sitemap, but you can leave the login script file out, as you don't need it indexed and Google won't index it anyway.
Once you have your sitemap ready, consider including a path in the robots file, like this:
User-agent: *
Sitemap: http://[your website address here]/sitemap.xml
Hope that helps!
None of the ways you decide to display the images will affect Google AdSense, unless you serve a page with a hundred ads plus the image, or you use some kind of trick to "force" users to click on the ads.
That said, you need to think of the users; put yourself in your visitors' shoes. What would be easier? Faster? Prettier? If you ask me, I would definitely go with the lightbox to display image galleries.
It won't hurt you in Google's eyes either; they are smart enough to interpret lightboxes just fine.
Nope, it isn't too good to be true. Page caching is one of the smartest ways to increase speed.
How it works depends on the plugin, but usually it avoids DB queries and processing by saving a copy of a page for a certain amount of time, so when the next person asks for it, the site serves the saved version instead of generating it all over again.
EDIT: even on highly dynamic sites, pages do not constantly change. For example, go to CNN now and go again in an hour: you will most likely see the exact same thing except for the latest news. The rest of the page could easily be cached for that hour, so every user accessing it within that hour gets the cached copy instead of the page being regenerated thousands of times (once per visitor).
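For what it's worth, here is a bare-bones sketch of the idea in plain PHP (a real caching plugin does much more; the cache path, TTL and render_page() function are hypothetical):
<?php
// Serve a saved copy of this URL if one exists and is less than an hour old.
$cacheFile = __DIR__ . '/cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
$cacheTtl  = 3600; // seconds

if (file_exists($cacheFile) && (time() - filemtime($cacheFile)) < $cacheTtl) {
    readfile($cacheFile);
    exit;
}

// Otherwise build the page as usual (DB queries, templates, etc.), capturing the output...
ob_start();
render_page(); // hypothetical function that echoes the full page
$html = ob_get_clean();

// ...save the result for the next visitors, then output it.
file_put_contents($cacheFile, $html);
echo $html;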
Have you tried looking at the response headers you get when accessing those pages? Try http://web-sniffer.net/ to verify that your site is actually responding with a 301.
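If you prefer the command line, curl can show you the same thing (assuming you have it installed; the URL is a placeholder):
curl -I http://www.yoursite.com/old-page/
Look for a 301 status line and the "Location:" header in the output.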
Anyway, I think Mozbot, AKA Roger, crawls sites once a week, so if you made the changes only 4 days ago, I would hold on for a few more days.
How about using the robots file to disallow those pages? However, if you have the 301 in place and a noindex tag on the new page, it should work (but who knows; they are still showing the old URLs after a year).
Did you try using "Fetch as Googlebot" in WMT? If not, go ahead and then hit "Submit to index"; that usually speeds up big changes on sites.
Hope that helps.
If you are using the appropriate rel="alternate" hreflang="x" you shouldn't have any problem. For more info you can read here: https://support.google.com/webmasters/answer/189077?hl=en
Most likely, as the pages are the same, they won't show up at the same time; Google will serve the one it considers appropriate for the user, hence the use of hreflang, to tell Google which version to serve to each user.
Hey Christy,
I did follow those steps about a week ago, when I opened a question about the same issue; that didn't solve it. However, since yesterday (I think that was my latest login) I haven't been asked to log in again, so it seems fixed in that area. OSE is still asking me to log in every time, unlike before, when being logged in to Moz meant I was logged in to OSE too.
Thanks!
What do they have? 10,000 pages of uninteresting content? A robots meta tag with noindex,follow will do to leave them out of the engines. But to decide, you really need to know what those pages contain. 10,000 isn't a few, and if there's valuable content worth sharing, a page could earn a link, which, if you block it through robots.txt, won't even flow pagerank.
It all comes down to what those pages are for...
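If you do go the noindex,follow route, the tag simply goes in the <head> of each of those pages:
<meta name="robots" content="noindex,follow">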
Well, 20 is a lot. Those sites are probably safe, but the main corporate site might be flagged as "selling links", even though it isn't.
I just had my manual penalty revoked, and in the process Google pointed to a link from federicoeinhorn.com to my business site (the link was on the About page, which said I was the CEO of *** and linked there). Google listed that specific link as one of the examples of infringing links, so I would suggest you nofollow all those external links in the nav.
Hmm... usually it is advised to nofollow those kinds of links. However, as your main corporate site always links to the same places (how many external sites?), there shouldn't be any problem. Still, it wouldn't hurt to nofollow them to avoid any penalty.
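Adding the nofollow is just a matter of adding the rel attribute to each of those nav links (the URL here is a placeholder):
<a href="http://www.external-site-example.com/" rel="nofollow">External site</a>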
Sorry, I do not provide SEO services/consulting. If you are looking for an SEO, you can search in the recommended companies section: http://moz.com/article/recommended
I only contribute here as a hobby and a way to learn more every day
Having the image link to its own file does not help your image get indexed faster or added to a sitemap, at least not that I know of.
On my blog, I don't have the images linking to their files and they are indexed just fine, plus they get added to the image sitemap generated by the "xml-sitemaps" automated script.
Having the image file redirect to the page containing the post is actually serving users different content from what Google may see, hence the "Image Mismatch" penalty. If Google offers a link to the image, it should load the image; that's why they also offer a link to the page, or clicking the image takes you to the page where the image is. You can read more on the "Image Mismatch" penalty here: https://support.google.com/webmasters/answer/3394137
The "page" where you see only the image is the image file itself; there's no page there, just the file.
WordPress does that by default, but you can simply change it to one of the other options they offer (no link, link to another page, etc.) and it gets "saved" as the new default.
The only benefit of linking to the image file is that images are usually scaled down to fit into posts, so someone may want to see the image in its full size, hence the link to the image file. There are other ways to deal with that too, like lightboxes.
You could redirect the image to the page where the image is, but that requires some coding (detecting where your image is being requested from, etc.). Doing that may also carry a penalty from Google (recently announced) called "Image Mismatch".
There's no "best practice" here; the best is what you consider best for each image. Take the image scaling example I mentioned: say you post an infographic, and the image is much larger than the space you have available; then it makes sense to link to the image file so the user can see the infographic in its full size.
Hope that helps!
PRWeb is the best one in my opinion. But almost all press release sites have gone nofollow (within their site and distribution network), so you won't get any link juice from them, if that is what you were looking for.
However, that doesn't mean you can't get some links organically from others sharing the news, etc.
Hope that helps.
OSE doesn't crawl the entire Web, only a portion of it. It takes a lot of resources to crawl the whole Web for links; however, the main links, those that aren't too "hidden", are easily found by OSE.
Hey Daniel,
The canonical on every possible filtered version should point to the main page without any filters.
Check the following Q&A from yesterday: http://moz.com/community/q/canonicalization-w-search-and-filter-operators-parameters
Hope that helps.
Hmmm... not necessarily. You can use that domain and not be penalized by the EMD algo. You could run into a problem if you did all the link building with the same keyword, over-exposing the targeted keyword, but as long as you keep it clean, I guess you are safe.
The EMD update, I think, targets more those registering every possible EMD domain and redirecting them to their site (brandname-key1.com / brandname-key2.com / etc.).
Not necessarily, unless you already have something built up with enough authority on henryjewelers.com. For example:
apple computer - apple.com
abercrombie & fitch - abercrombie.com
Tiffany & Company - tiffany.com
etc.
Reports are exported as PDF, so you can't edit the text directly. However, there are some tools that let you edit PDF files as if they were DOC files, for example NitroPDF (it's not free, but it's worth it, and I think they offer a free trial).