Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Wrong URLs indexed, Failing To Rank Anywhere
-
I’m struggling with a client website that's massively failing to rank.
It was published in Nov/Dec last year - not optimised or ranking for anything - and it's about 20 pages. I came on board recently, and 5-6 weeks ago we added new content, did the on-page optimisation and finally changed from the non-www to the www version in .htaccess and the WordPress settings (while setting www as preferred in Search Console). We then did a press release and have since acquired about 4 partial-match contextual links on good websites (before this, it had virtually none, save for social profiles etc.).
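For reference, the non-www to www change described here is typically a rule like the following in .htaccess - a rough sketch assuming Apache with mod_rewrite enabled, with example.com standing in for the real domain:

```apache
# Force the www version with a permanent (301) redirect.
# Assumes Apache with mod_rewrite; example.com is a placeholder domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```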
I should note that just before we added the (about 50%) new content and optimised, my developer accidentally published the dev site of the old version of the site and it got indexed. He immediately added it correctly to robots.txt, and I assumed it would therefore drop out of the index fairly quickly and we need not be concerned.
Now it's about 6 weeks later, and we’re still not ranking anywhere for our chosen keywords. The keywords are around “egg freezing,” so only moderate competition. We’re not even ranking for our brand name, which is 4 words long and pretty unique. We were ranking in the top 30 for this until yesterday, but it was the press release page on the old (non-www) URL!
I was convinced we must have a duplicate content issue after realising the dev site was still indexed, so last week, we went into Search Console to remove all of the dev URLs manually from the index. The next day, they were all removed, and we suddenly began ranking (~83) for "freezing your eggs," one of our keywords! This seemed unlikely to be a coincidence, but once again, the positive sign was dampened by the fact it was a non-www page that was ranking, which made me wonder why the non-www pages were still even indexed. When I do site:oursite.com, for example, both non-www and www URLs are still showing up…
Can someone with more experience than me tell me whether I need to give up on this site, or what I could do to find out if I do?
I feel like I may be wasting the client's money here by building links to a site that could be under a very weird penalty.
-
Thanks, we'll check that all of the old URLs are redirecting correctly (though given the .htaccess and WP settings changes, I'd assume they are).
Will also perform the other check you mentioned and report back if anything is amiss... Thank you, Lynn.
-
It should sort itself out if the technical setup is OK, so yes, keep doing what you are doing!
I would not use the removal request tool to try to get rid of the non-www URLs; it is not really intended for this kind of usage and might bring unexpected results. Usually your 301s will bring about the desired effect faster than most other methods. You can use a tool like this one just to 100% confirm that the non-www version is 301 redirecting to the www version on all pages (you probably already have, but I mention it again to be sure).
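If it helps, that check can also be scripted. A rough Python sketch (not the tool mentioned above) that verifies a non-www URL 301-redirects to its www equivalent - example.com is a placeholder for the real domain:

```python
# Rough sketch: confirm non-www URLs 301-redirect to their www equivalents.
# example.com is a placeholder for the real domain.
from urllib.parse import urlsplit, urlunsplit
import urllib.request
import urllib.error

def www_equivalent(url):
    """Return the www version of a URL, leaving already-www URLs unchanged."""
    parts = urlsplit(url)
    host = parts.netloc
    if not host.startswith("www."):
        host = "www." + host
    return urlunsplit((parts.scheme, host, parts.path, parts.query, parts.fragment))

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, *args, **kwargs):
        return None  # returning None stops urllib from following the redirect

def check_redirect(url):
    """Fetch url without following redirects; return (status, Location header)."""
    opener = urllib.request.build_opener(_NoRedirect)
    try:
        resp = opener.open(url)
        return resp.status, None
    except urllib.error.HTTPError as e:
        return e.code, e.headers.get("Location")

# Usage (requires network access):
#   status, location = check_redirect("http://example.com/press-release/")
#   ok = status == 301 and location == www_equivalent("http://example.com/press-release/")
```

Run `check_redirect` over every URL in the sitemap and flag anything that is not a 301 pointing at the exact www equivalent.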
Are the www URLs in your sitemap showing as all (or mostly) indexed in Search Console? If so, you should really be OK and it might just need a bit of patience.
-
Firstly, thank you both very much for your responses - they were both really helpful. It sounds, then, like the only solution is to keep waiting while continuing our link-building and hoping that might help (Lynn, sadly we have already taken care of most of the technical suggestions you made).
Would it be worth also submitting removal requests via Search Console for the non-www URLs? I had assumed these would drop out quickly after setting the preferred domain, but that didn't happen, so perhaps forcing it like we did for the development URLs could do the trick?
-
Hi,
As Chris mentions, it sounds like you have done the basics and you might just need to be a bit patient. Especially with only a few incoming links, it might take Google a little while to fully crawl and index the site and any changes.
It is certainly worth double checking the main technical bits:
1. Confirm the dev site is fully removed from the index (the process differs slightly for complete subdomains vs. subfolders, but in my experience removal via Search Console is usually pretty quick). After that, make sure the dev site is permanently removed from its current location and returns a 404, or is password protected.
2. Double-check the www vs. non-www 301 logic and make sure it is all working as expected.
3. Submit a sitemap with the latest URLs and confirm indexing of the pages in Search Console (important for quickly identifying any hidden indexing issues).
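Point 3 can be as simple as a hand-built list of the canonical www URLs. A minimal Python sketch - the domain and paths below are hypothetical placeholders:

```python
# Minimal sketch: build a sitemap containing only the canonical www URLs.
# The domain and paths below are hypothetical placeholders.
from xml.etree.ElementTree import Element, SubElement, tostring

def build_sitemap(base, paths):
    """Return sitemap XML listing base joined with each path."""
    urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for path in paths:
        url = SubElement(urlset, "url")
        SubElement(url, "loc").text = base.rstrip("/") + path
    return tostring(urlset, encoding="unicode")

sitemap_xml = build_sitemap("https://www.example.com",
                            ["/", "/egg-freezing/", "/contact/"])
```

Save the output as sitemap.xml, upload it to the site root, and submit it in Search Console.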
Then it is a case of waiting for Google to incorporate all the updates into the index. A mixture of www and non-www URLs for a period is not unusual in such situations. As long as the 301s are working correctly, the www versions should eventually be the only ones you see.
Perhaps important to note that this does not sound like a 'penalty' as such, but a technical issue - so it needs a technical fix in the first instance and should not hold you back in the medium-to-long term as a penalty might. That said, if your keywords are based on egg freezing of the human variety (i.e. IVF services etc.), then that is usually a pretty competitive area, often with a lot of high-authority informational domains in the mix in addition to the commercial ones. So, if the technical stuff is all good, I would start looking at competition/content again - maybe your keywords are more competitive than you think (just a thought!).
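On point 1 above, one common way to keep a dev/staging site out of the index for good is HTTP authentication rather than relying on robots.txt (which blocks crawling but does not deindex pages that are already in). A rough .htaccess sketch for the dev environment, assuming Apache - the password file path is a hypothetical placeholder:

```apache
# Password-protect the dev/staging site so crawlers can never fetch it.
# The AuthUserFile path is a hypothetical placeholder.
AuthType Basic
AuthName "Staging - keep out"
AuthUserFile /home/dev/.htpasswd
Require valid-user
```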
-
We've experienced almost exactly the same process in the past when a dev accidentally left staging.domain.com open for indexation... the really bad news is that despite noticing this, blocking via robots.txt and going through the same process to remove the wrong URLs via Search Console etc., getting the correct domain ranking in the top 50 positions took almost 6 infuriating months!
Just like you, we saw the non-www version and the staging.domain version of the pages indexed for a couple of months after we fixed everything up; then, all of a sudden one day, the two wrong versions of the site disappeared from the index and the correct one started gaining some traction.
All this to say that, to my knowledge, there are no active tasks you can perform beyond what you've already done to speed this process up. Maybe building a good volume of strong links will send a positive signal that the correct version should be recrawled. We spent a considerable amount of time looking into it and the answer kept coming back the same: "it just takes time for Google to recrawl the three versions of the site and figure it out".
This is essentially educated speculation, but I believe the reason this happens is that the wrong versions were crawled first, at different points, and treated as the originals, so the correct one was seen as 100% duplicate and ignored. This would explain why you're seeing what you are, and also why, in a magical 24-hour window that could come at any point, everything sorted itself out - once the "original" versions of the domain no longer exist, the truly correct one is finally unique.
If my understanding of all this is correct, it would also mean that moving your site to yet another domain wouldn't help either since according to Google's cache/index, the wrong versions of your current domain are still live and the "original" so putting that same site/content on a different domain would just be yet another version of the same site.
Apologies for not being able to offer actionable tasks or good news but I'm all ears for future reference if anyone else has a solution!