Wrong URLs Indexed, Failing to Rank Anywhere
-
I’m struggling with a client website that's massively failing to rank.
It was published in Nov/Dec last year, not optimised or ranking for anything; it's about 20 pages. I came on board recently, and 5-6 weeks ago we added new content, did the on-page optimisation, and finally switched from the non-www to the www version in .htaccess and the WP settings (while setting www as the preferred domain in Search Console). We then did a press release and have since acquired about 4 partial-match contextual links on good websites (before this, it had virtually none, save for social profiles etc.).
I should note that just before we added the new content (about 50% of the site) and optimised, my developer accidentally published the dev site of the old version of the site and it got indexed. He immediately blocked it in robots.txt, and I assumed it would therefore drop out of the index fairly quickly and that we need not be concerned.
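In hindsight, that assumption was the mistake: a robots.txt Disallow only blocks crawling; it does nothing to remove URLs that are already in the index. A minimal sketch using Python's standard-library robots.txt parser (the dev hostname here is hypothetical) shows exactly what the rule does and doesn't do:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical dev-site robots.txt that blocks all crawling.
robots_txt = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Googlebot may no longer fetch any page on the dev host...
print(parser.can_fetch("Googlebot", "https://dev.example.com/about/"))  # False

# ...but that only stops crawling. Pages already indexed stay indexed until
# they are removed via the removal tool, or Google can recrawl them and see
# a 404/410 or noindex, which robots.txt blocking actually prevents.
```

So blocking the dev site and assuming it would "drop out" quickly conflates two different mechanisms.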
Now it's about 6 weeks later, and we’re still not ranking anywhere for our chosen keywords. The keywords are around “egg freezing,” so only moderate competition. We’re not even ranking for our brand name, which is 4 words long and pretty unique. We were ranking in the top 30 for this until yesterday, but it was the press release page on the old (non-www) URL!
I was convinced we must have a duplicate content issue after realising the dev site was still indexed, so last week we went into Search Console and manually removed all of the dev URLs from the index. The next day, they were all removed, and we suddenly began ranking (~83) for "freezing your eggs," one of our keywords! This seemed unlikely to be a coincidence, but once again the positive sign was dampened by the fact that it was the non-www page that was ranking, which made me wonder why the non-www pages were even still indexed. When I do site:oursite.com, for example, both non-www and www URLs are still showing up.
Can someone with more experience than me tell me whether I need to give up on this site, or what I could do to find out if I do?
I feel like I may be wasting the client's money here by building links to a site that could be under a very weird penalty.
-
Thanks, we'll check that all of the old URLs are redirecting correctly (though I'd assume, given the .htaccess and WP settings changes, they would).
Will also perform the other check you mentioned and report back if anything is amiss... Thank you, Lynn.
-
It should sort itself out if the technical setup is OK, so yes, keep doing what you are doing!
I would not use the removal request tool to try to get rid of the non-www URLs; it is not really intended for this kind of usage and might bring unexpected results. Usually your 301s will bring about the desired effect faster than most other methods. You can use a tool like this one just to confirm 100% that the non-www version is 301 redirecting to the www version on all pages (you probably already have, but I mention it again to be sure).
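If you'd rather script that check than use a tool, the core of the logic is just comparing each source URL against the Location header its server returns. A small sketch (host names are placeholders; in practice you'd first fetch each URL and confirm the status code is actually 301):

```python
from urllib.parse import urlparse

def is_correct_www_redirect(source_url: str, location: str) -> bool:
    """Check that a non-www URL's redirect target is its exact www twin."""
    src, dst = urlparse(source_url), urlparse(location)
    expected_host = "www." + src.netloc
    return (
        dst.netloc == expected_host
        and dst.path == src.path          # same page, not a blanket redirect home
        and dst.scheme in ("http", "https")
    )

# A per-page redirect to the matching www URL is what you want:
print(is_correct_www_redirect("http://example.com/pricing/",
                              "http://www.example.com/pricing/"))   # True

# A blanket redirect that dumps every old URL on the homepage is a red flag:
print(is_correct_www_redirect("http://example.com/pricing/",
                              "http://www.example.com/"))           # False
```

The path comparison matters: a site-wide rule that sends every old URL to the www homepage still "redirects," but it throws away the page-level equivalence that lets the www versions inherit the old rankings.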
Are the www URLs in your sitemap showing as all (or mostly) indexed in Search Console? If yes, then really you should be OK and it might just need a bit of patience.
-
Firstly, thank you both very much for your responses - they were both really helpful. It sounds, then, like the only solution is to keep waiting while continuing our link-building and hoping that might help (Lynn, sadly we have already taken care of most of the technical suggestions you made).
Would it be worth also submitting removal requests via Search Console for the non-www URLs? I had assumed these would drop out quickly after setting the preferred domain, but that didn't happen, so perhaps forcing it like we did for the development URLs could do the trick?
-
Hi,
As Chris mentions, it sounds like you have done the basics and might just need to be a bit patient. Especially with only a few incoming links, it might take Google a little while to fully crawl and index the site and any changes.
It is certainly worth double checking the main technical bits:
1. The dev site is fully removed from the index (there are slightly different ways to remove complete subdomains vs. subfolders, but in my experience removal via Search Console is usually pretty quick). After that, make sure the dev site is permanently removed from its current location and returns a 404, or that it is password protected.
2. Double-check the www vs. non-www 301 logic and make sure it is all working as expected.
3. Submit a sitemap with the latest URLs and confirm indexing of the pages in Search Console (important in order to quickly identify any hidden indexing issues).
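For point 2, a typical .htaccess sketch of the non-www to www rule looks like the below. This is one common Apache pattern, not necessarily what your WP setup generated (the domain is a placeholder):

```apache
# Force www: 301 any non-www request to its exact www counterpart.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

WordPress's Site Address setting usually handles this on its own as well; the explicit rule just guarantees every old URL 301s to its matching www page rather than somewhere generic.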
Then it is a case of waiting for Google to incorporate all the updates into the index. A mixture of www and non-www URLs for a period is not unusual in such situations. As long as the 301s are working correctly, the www versions should eventually be the only ones you see.
Perhaps it is important to note that this does not sound like a 'penalty' as such, but a technical issue, so it needs a technical fix in the first instance and should not hold you back in the medium to long term as a penalty might. That said, if your keywords are based on egg freezing of the human variety (i.e. IVF services etc.), then I think that is usually a pretty competitive area, often with a lot of high-authority informational domains floating around in the mix in addition to the commercial ones. So if the technical stuff is all good, I would start looking at competition/content again - maybe your keywords are more competitive than you think (just a thought!).
-
We've experienced almost exactly the same process in the past, when a dev accidentally left staging.domain.com open for indexation... the really bad news is that despite noticing it, blocking via robots.txt, and going through the same process to remove the wrong URLs via Search Console etc., getting the correct domain ranking in the top 50 positions took almost 6 infuriating months!
Just like you, we saw the non-www version and the staging.domain version of the pages indexed for a couple of months after we fixed everything up; then, all of a sudden, one day the two wrong versions of the site disappeared from the index and the correct one started gaining some traction.
All this to say that to my knowledge, there are no active tasks you can really perform beyond what you've already done to speed this process up. Maybe building a good volume of strong links will push a positive signal that the correct one should be recrawled. We did spend a considerable amount of time looking into it and the answer kept coming back the same - "it just takes time for Google to recrawl the three versions of the site and figure it out".
This is essentially educated speculation, but I believe the reason this happens is that the wrong versions were, for whatever reason, crawled first and treated as the original, so the correct one was seen as a 100% duplicate and ignored. This would explain why you're seeing what you are, and also why, in a magical 24-hour window that could come at any point, everything sorted itself out: once the "original" versions of the domain no longer exist, the truly correct one is finally unique.
If my understanding of all this is correct, it would also mean that moving your site to yet another domain wouldn't help either: according to Google's cache/index, the wrong versions of your current domain are still live and still the "original," so putting the same site/content on a different domain would just be yet another version of the same site.
Apologies for not being able to offer actionable tasks or good news but I'm all ears for future reference if anyone else has a solution!