Google is indexing bad URLs
-
Hi All,
The site I am working on is built on WordPress. The Revolution Slider plugin was installed at one point and, although no longer used, it remained on the site for some time. The plugin began creating hundreds of URLs whose pages contain nothing but code, and I noticed Google was indexing them. The URLs follow the structure: www.mysite.com/wp-content/uploads/revslider/templates/this-part-changes/
I have done the following to prevent these URLs from being created & indexed:
1. Added a directive in my .htaccess to 404 all of these URLs (rough sketches of this and the robots.txt rule are shown after this list)
2. Blocked /wp-content/uploads/revslider/ in my robots.txt
3. Manually de-indexed each URL using the GSC removal tool
4. Deleted the plugin
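For reference, the rules are roughly along these lines (the .htaccess line below is illustrative rather than an exact copy of what is on the server):

```
# robots.txt – block crawling of the orphaned plugin folder
User-agent: *
Disallow: /wp-content/uploads/revslider/
```

```
# .htaccess – illustrative: serve a 404 for anything under the revslider templates path
RedirectMatch 404 ^/wp-content/uploads/revslider/templates/.*$
```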
However, new URLs still appear in Google's index, despite being blocked by robots.txt and resolving to a 404. Can anyone suggest any next steps?
Thanks!
-
All of the plugins I can find only allow the tag to be deployed on pages, posts, etc. You pick from a pre-defined list of existing content, instead of just whacking in a URL and having it inserted (annoying!).
If you put an index.php at that location (the location of the 404), you could put whatever you wanted in it. It might work (maybe test with one first). It would resolve with a 200, so you'd then need to force a 410 over the top. Not very scalable, though...
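A minimal sketch of what that stand-in index.php might look like (purely illustrative – the directory, filename, and markup are assumptions, and it would be worth testing on a single URL first):

```
<?php
// Hypothetical index.php dropped into one of the orphaned
// /wp-content/uploads/revslider/templates/... directories.
http_response_code(410);                    // "gone" rather than the default 200
header('X-Robots-Tag: noindex, nofollow');  // belt-and-braces de-indexing signal
?>
<!doctype html>
<html>
<head>
  <meta name="robots" content="noindex, nofollow">
  <title>Gone</title>
</head>
<body>This content has been permanently removed.</body>
</html>
```

Because PHP sends the status code and headers before any output, the crawler would see a 410 plus a noindex signal instead of a 200.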
-
I do agree, I may have to pass this off to someone with more backend experience than myself. In terms of plugins, are you aware of any that will allow you to add noindex tags to an entire folder?
Thanks!
-
Hmm, that's interesting - it should work just as you say! This is the point where you need a developer's help rather than an SEO analyst's :') sorry!
Google will revisit 410s if it believes there is a legitimate reason to do so, but it's much less likely to revisit them than it is with 404s (which leave open the possibility that the content will return).
Plugins are your friends. Too many will overload a site and make it run pretty slowly (especially as PHP has no multi-threading support!) - but you would only need this plugin temporarily anyway.
You might have to start using something like phpMyAdmin to browse your SQL database. It's possible that the uninstall didn't work properly and there are leftover tables still at work, generating fresh URLs. You can quash them at the database level if required; however, I'd say go to a web developer, as manual DB edits can be pretty hazardous for a non-expert.
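If you'd rather not poke around phpMyAdmin straight away, a quick read-only check run from inside WordPress (for example via a temporary must-use plugin) could look something like this – the table and option names are the usual RevSlider defaults, so treat them as assumptions for your install:

```
<?php
// Hypothetical read-only diagnostic: list anything RevSlider left behind.
// Run inside WordPress so $wpdb is available; nothing here modifies data.
global $wpdb;

// Leftover plugin tables (RevSlider normally creates wp_revslider_* tables)
$tables = $wpdb->get_col( "SHOW TABLES LIKE '{$wpdb->prefix}revslider%'" );

// Leftover options the uninstall may have missed
$options = $wpdb->get_col(
    "SELECT option_name FROM {$wpdb->options} WHERE option_name LIKE '%revslider%'"
);

error_log( 'RevSlider tables left behind: ' . implode( ', ', $tables ) );
error_log( 'RevSlider options left behind: ' . implode( ', ', $options ) );
```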
-
Thank you for all your help. I added a directive to 410 the pages in my .htaccess like so: Redirect 410 /revslider*/. However, it does not seem to work.
Currently, I am using Options All -Indexes to 404 the URLs. I still worry, though: even if Google would not revisit a 410, could it still initially index it? This seems to be the case with my 404 pages - Google is actively indexing the new 404 pages that the broken plugin is producing.
As I cannot seem to locate the directory in cPanel, adding a noindex tag to these pages has been tough. I will look for a plugin that can dynamically add the tag based on folder structure, because the URLs are still actively being created.
The ongoing creation of the URLs is the ultimate source of the issue. I expected that deleting the plugin would resolve it, but that does not seem to be the case.
-
Just remember, robots.txt wildcard support (for Google) is very limited - essentially just "*" (match any string of characters) and "$" (end of URL); full regular expressions, with things like "?" quantifiers, are not supported. Changing the response from 404 to 410 should really help, but be prepared to give Google a week or two to digest your changes.
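On the .htaccess side, one likely reason the Redirect 410 /revslider*/ line isn't firing is that Apache's plain Redirect directive only matches literal path prefixes and doesn't understand wildcards - RedirectMatch is the regex-based equivalent. A rough sketch (the pattern is an assumption based on the URL structure you described, so test it carefully):

```
# .htaccess – illustrative; RedirectMatch takes a regular expression,
# unlike plain Redirect, which only matches literal path prefixes
RedirectMatch 410 ^/wp-content/uploads/revslider/templates/.*$
```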
Yes, it would be tricky to inject meta noindex tags into those URLs, but it wouldn't be impossible. You could create an index.php file in the directory of each page containing a meta noindex directive, or use a plugin to inject the tag onto specific URLs. There will be ways - don't give up too early! That being said, this part probably won't add much more than the 410s will.
It wouldn't be a bad idea to inject the noindex tags, but do it for 410s and not for 404s (doing it for 404s could cause you BIG problems further down the line). Remember: 404 - "not found, may come back"; 410 - "gone, never coming back". Really, all 410s should be served with noindex tags. Google can read dynamically generated content, but it is less likely to do so and crawls it less often. Still, it would at least make the problem begin shrinking over time. It would be better to get the tags into the non-modified source code (server-side rendering).
By the way, you can send a no-index directive in the HTTP header if you are really stuck!
https://sitebulb.com/hints/indexability/robots-hints/noindex-in-html-and-http-header/
The post above is quite helpful - it shows noindex directives in HTML as well as in the HTTP header.
In contrast to that example, you'd be serving a 410 (gone) rather than a 200 (OK).
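If the server runs Apache with mod_headers, a rough sketch of sending that header from the main .htaccess (handy since you can't locate the folder in cPanel) could look like this - note the path is an assumption and the <If> block needs Apache 2.4+:

```
# Illustrative only: attach a noindex directive to the HTTP response header
# for anything served from the orphaned revslider upload path.
# "always" makes sure the header is also sent on 404/410 error responses.
<IfModule mod_headers.c>
  <If "%{REQUEST_URI} =~ m#^/wp-content/uploads/revslider/#">
    Header always set X-Robots-Tag "noindex, nofollow"
  </If>
</IfModule>
```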
-
Thank you for your response! I will certainly use the wildcard rule in my robots.txt and try to change my .htaccess directive to 410 the pages.
However, the issue is that a defunct plugin is randomly creating hundreds of these URLs without my knowledge, and I cannot seem to access them. As this is the case, I can't add a noindex tag to them.
This is why I manually de-indexed each page using the GSC removal tool and then blocked them in my robots.txt. My hope was that after doing so, Google would no longer be able to find the bad URLs.
Despite this, Google is still actively crawling and indexing new URLs following this path, even though they are blocked by my robots.txt (validated). I am unsure how these URLs even continue to be created, as I deleted the plugin.
I had the idea of writing some JavaScript that would check the status code and insert a noindex tag if the header returned a 404, but I don't believe this would even be recognized by Google, as it would be inserted dynamically. Ultimately, I would like to find a way to get the plugin to stop creating these URLs; that way I can simply manually de-index them again.
Thanks,
-
You have taken some good measures there, but it does take Google time to revisit URLs and re-index them (or remove them from the index!)
Did you know, a 404 only means "not found" and leaves open the possibility that the URL will come back? The status code you are looking to serve is 410 (gone), which is a harder signal.
Robots.txt (for Google) does in fact support wildcards. It's not full regex - in fact the only wildcards supported are "*" (asterisk: matching any character or string of characters) and "$" (end of URL). You could supplement with a rule like this:
User-agent: *
Disallow: /*revslider*

That should, theoretically, block Google from crawling any URL containing the string "revslider". Be sure to **validate** any new robots.txt rules using Google Search Console to check they are working right! Remember that robots.txt affects crawling and **not indexation!** To give Google a directive not to index a URL, you should use the meta noindex tag: [https://support.google.com/webmasters/answer/93710?hl=en](https://support.google.com/webmasters/answer/93710?hl=en)

**The steps are:**
- Remove your existing robots.txt rule (while it is in place, it stops Google from crawling the URLs, so Google never sees a meta noindex tag or any change in status code)
- Apply status 410 to those pages instead of 404
- Apply meta noindex tags to the 410'ing URLs
- Wait for Google to digest and remove the pages from its index
- Put your robots.txt rule back to prevent it ever happening again
- Supplement with an additional wildcard rule
- Done!
- Hope that helps