How to fix the Google index after fixing a site infected with malware?
-
Hi All
Upgraded a Joomla site for a customer a couple of months ago that was infected with malware (it wasn't flagged as infected by Google). The site is fine now, but I'm still noticing search queries for "cheap adobe" etc. with links to http://domain.com/index.php?vc=201&Cheap_Adobe_Acrobat_xi in Webmaster Tools (about 50 in total). These URLs redirect back to the home page and seem to be remaining in the index (I think Joomla is doing this automatically).
Firstly, what sort of effect would these be having on their rankings? Would they be seen by Google as duplicate content for the home page? (Moz doesn't report them as such, as there are no internal links.)
Secondly, what's my best plan of attack to fix them? Should I set up 404s for them and then submit them to Google? Will resubmitting the site to the index fix things?
Would appreciate any advice or suggestions on the ramifications of this and how I should fix it.
Regards, Ian
-
Thanks Tom
That's a good point. Part of my problem lies in the number of URLs with parameters (thousands), so applying status codes to them individually isn't really viable.
Starting to see the URLs clean up with the addition of the entries in robots.txt.
Regards
Ian
-
I would make them return a 410, not a 404.
A 410 means the page is gone for good; if you use a 404, Google will keep coming back to see if you've fixed it.
Sending Google a 410 lets them know it's gone.
http://moz.com/learn/seo/http-status-codes
all the best,
tom
-
OK, I might have a solution that would at least work for my situation.
Since implementing SEF URLs on the site I have no real need for any URLs with parameters. Adding the following to robots.txt should prevent any indexing of old pages or pages with parameters:
Disallow: /index.php?*
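(A quick note for anyone copying this: a Disallow line only takes effect as part of a User-agent record, so the full robots.txt entry would look something like the below, assuming the rule should apply to all crawlers.)
User-agent: *
Disallow: /index.php?*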
Tested it in Webmaster Tools with some of the offending URLs and it seems to work. I'll wait until the next indexing and post back or mark it as answered.
-
Thanks all for your help.
A little more information, and maybe a little more advice required.
Since fixing the malware, http://domain.com/index.php?vc=201&Cheap_Adobe_Acrobat_xi and similar are no longer actual pages. Joomla sees anything after the ? as a parameter and simply ignores it because it no longer matches a page, which is why it just defaults to the home page, http://domain.com/index.php. This is the default behavior of Joomla and probably most other content management systems. The problem is that Google indexed that page when the site was infected, and it remains in the index because Google sees a status code of 200 when re-crawling it.
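If anyone wants to verify what the server is actually returning for one of these URLs, a quick command-line check like the one below (using the placeholder domain from the example above) prints just the status code:
curl -s -o /dev/null -w "%{http_code}\n" "http://domain.com/index.php?vc=201&Cheap_Adobe_Acrobat_xi"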
The problem is now a bit broader and has more ramifications than first thought. Any pages from the previous system that used parameters would receive a 200 status code and remain in the index. Checking URL parameters in Webmaster Tools confirms this, with various parameters showing thousands of URLs monitored. Keep in mind Google is showing a message that there are no problems with parameters for this site.
So the advice I need now relates to URL parameters in Webmaster Tools. The new site uses SEF URLs and so makes much less use of parameters. How can I ensure that the old, redundant pages with parameters are dropped from the index? Doing it directly would involve thousands of 301s or 404s, let alone trying to work them all out. There is a reset link for each parameter in Webmaster Tools, but not much documentation as to what it does. If I reset all the parameters, would that clean up the index?
I'd be interested in what others think about this issue, because I feel it might be a common problem with CMS-based platforms: after major changes, thousands of parameter-based URLs just defaulting to the home page and other pages probably affects site and page ranking.
Ian
-
The search engines are keeping these URLs indexed because following them through the redirect returns a 200 server header - which to the SEs means all is well and there is a page there to index. As you note in other responses, the only way to change that is to force the server to return a 404 header as a signal to the SEs to eventually drop them.
Yes, you could use a robots.txt directive to block those specific URLs that are the target of the spam links, in order to satisfy the URL Removal Tool's requirement for allowing a removal request. That should work as a quicker solution than trying to make coding changes in Joomla (sorry, it's been about 3.5 yrs since I've done any Joomla work).
Good luck!
Paul
[EDIT: Gah...ignore the P.S. as I didn't notice you don't have an easy way to get redirects into the Zeus server before Joomla kicks in. Sorry]
P.S. A final quick option would be to write a redirect in htaccess to 301-redirect the fake URLs to a real 404 page. This would kick in before Joomla got a chance to interfere with its pseudo-redirect.
-
You're right, I guess I was focused on the index. Moz isn't showing any external links to these pages and neither is Webmaster Tools. My feeling is that Google is retaining them for some reason, maybe just the keywords in the URL?
-
I've checked the source of the visits and they are only coming from Google searches for "cheap adobe" and the like. The original malware used the site to get these searches into the index and then direct them to other sites/pages.
Being a Zeus server it doesn't use htaccess; my task would be a lot simpler if it did. It has an alternative rewrite file, but documentation is scarce on using it for 404s.
I'll keep researching.
-
That means nobody clicks on them, but how did Google find them? This isn't evidence that there are no links, just that no one has visited your site through them.
-
Thanks Paul
I've checked analytics and the only source of these URLs is Google organic searches, not external sites. I think, unfortunately, my problem is the dynamic nature of Joomla combined with a number of other factors, which together handle this in an SEO-unfriendly way.
I think my biggest challenge is getting the URLs to 404 before I submit them to the Webmaster Tools removal tool (which my research tells me needs to be done before you submit). I think I read there might be a robots.txt option, so I'll look into that.
Ian
-
These pages may have links from other spam sites; you don't want them to return a 200.
You want them to 404. In Joomla you can make the site use htaccess or not; make sure it does, and 404 the pages there.
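For anyone on Apache (Ian's site is on Zeus, so treat this purely as an illustration of the idea), a rough htaccess sketch might look like the lines below; the vc pattern is just taken from the example URL earlier in the thread:
RewriteEngine On
# Send old index.php URLs carrying the spammy "vc" parameter to a 404
RewriteCond %{QUERY_STRING} (^|&)vc= [NC]
RewriteRule ^index\.php$ - [R=404,L]
# Swap [R=404,L] for [G,L] to return 410 Gone instead, as Tom suggests elsewhere in the thread
-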
Thanks Alan
This seems to be caused by the combination of Joomla/Zeus and the redirection manager. The site is no longer infected, the only visits are from organic Google searches, and it's been a couple of months. Whatever the reason, Joomla feels it shouldn't 404 these pages and just displays the home page for them (it doesn't 301-redirect them).
My feeling is that these URLs in the index, and the visits from them, probably aren't doing the site any good.
-
Thanks Dave
I think this might be a good option, but I have a couple of problems with trying to achieve it. It's a Joomla CMS running on a Zeus server with a Search Engine Friendly URL plugin running. I think that is possibly the worst combination of technologies for SEO in history. The combination of URL rewrites in Zeus and the redirection manager in Joomla just displays the home page at the dodgy URL and gives it a 200 status code. I think this is why Google is taking so long to drop it from the index.
Ian
-
You absolutely do NOT want to redirect these links to the home page, Ian! These are spam links, coming from completely unrelated sites. They are Google's very definition of unnatural links and 301-redirecting them to your home page also redirects their potential damage to your home page.
You want them to return a 404 status as quickly as possible. I'd also be tempted to use the Webmaster Tools removal tool to try to speed up the process, especially if these junk links currently form a large percentage of your overall link profile. (You'll need to find and remove the redirect that currently re-points them to the home page too, for the 404 header to do its job of telling the search engines to drop the pages from their indexes.)
As far as rankings go, this isn't a potential duplicate content issue; it's a damaging unnatural links issue, which is even more significant. These are the kinds of links that could lead to at least an algorithmic penalty or, worst case, a manual penalty. Either way, these penalties are vastly harder to fix after the fact than to avoid in the first place.
In addition to the steps above, designed to make it clear those links don't belong to your site, I'd keep a good record of the links, their originating domains, and when and how they were originally created, covering both the malware attack and your fix. That way you have essential documentation should you receive a penalty and need to submit a reinclusion request.
Hope that answers your questions?
Paul
-
Why are they redirecting back to the home page? Do you redirect them, or are you still infected?
I would make sure they 404.
-
The easiest way would be a permanent redirect on the offending URLs.
Check the incoming variable (i.e. vc) and permanently redirect with a 301 if it's an offending one. Google, on seeing the 301, will drop the URL from the index.
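(As a very rough sketch of checking that incoming variable, noting that other replies in this thread favor answering these URLs with a 404/410 rather than a 301: something like the following at the very top of Joomla's index.php, inside the opening PHP block, would catch them before the CMS routes the request. The vc parameter is just taken from the example URL, so treat it as illustrative only.)
// Illustrative only: answer old spam URLs with 410 Gone before Joomla handles the request
if (isset($_GET['vc'])) {
    header('HTTP/1.1 410 Gone');
    exit;
}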
There is a URL removal tool in Google Webmaster Tools if the URL contains any personal information.
I had a similar issue a few days ago caused by a corrupt XML sitemap, and the index is already starting to clear up.