How to fix Google index after fixing site infected with malware.
-
Hi All
A couple of months ago I upgraded a Joomla site for a customer that was infected with malware (it wasn't flagged as infected by Google). The site is fine now, but I'm still noticing search queries for "cheap adobe" etc. with links to http://domain.com/index.php?vc=201&Cheap_Adobe_Acrobat_xi in Webmaster Tools (about 50 in total). These URLs redirect back to the home page and seem to be remaining in the index (I think Joomla is doing this automatically).
Firstly, what sort of effect would these be having on their rankings? Would they be seen by Google as duplicate content for the homepage? (Moz doesn't report them as such, as there are no internal links.)
Secondly, what's my best plan of attack to fix them? Should I set up 404s for them and then submit them to Google? Will resubmitting the site to the index fix things?
Would appreciate any advice or suggestions on the ramifications of this and how I should fix it.
Regards, Ian
-
Thanks Tom
That's a good point. Part of my problem lies in the number of URLs with parameters (thousands), so applying status codes of any type isn't really viable.
I'm starting to see the URLs clean up with the addition of the entries in robots.txt.
Regards
Ian
-
I would make them return a 410, not a 404.
A 410 marks a dead link; if you use a 404, Google will keep coming back to see if you've fixed it.
Sending Google a 410 lets them know the page is gone for good.
http://moz.com/learn/seo/http-status-codes
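If you're on Apache, something like this in .htaccess would do it - just a sketch, the vc pattern is a guess based on your example URL, and you'd need the equivalent rule on other servers:
RewriteEngine On
# Sketch: return 410 Gone for anything hitting index.php with a vc= parameter
RewriteCond %{QUERY_STRING} (^|&)vc= [NC]
RewriteRule ^index\.php$ - [G]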
all the best,
tom
-
OK, I might have a solution that would at least work for my situation.
Since implementing SEF URLs on the site I have no real need for any URLs with parameters. Adding the following to robots.txt should prevent any indexing of old pages or pages with parameters:
User-agent: *
Disallow: /index.php?*
I tested it in Webmaster Tools with some of the offending URLs and it seems to work. I'll wait until the next indexing and post back or mark this as answered.
-
Thanks all for your help.
A little more information, and maybe a little more advice required.
Since fixing the malware, http://domain.com/index.php?vc=201&Cheap_Adobe_Acrobat_xi and similar are no longer actual pages. Joomla sees anything after the ? as a parameter and, because it no longer matches a page, simply ignores it and defaults to the home page at http://domain.com/index.php. This is the default behaviour of Joomla and probably most other content management systems. The problem is that Google indexed that page when the site was infected, and it remains in the index because Google sees a status code of 200 when re-crawling it.
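You can confirm what Google sees with a quick header check (the domain here is a placeholder):
curl -I "http://domain.com/index.php?vc=201&Cheap_Adobe_Acrobat_xi"
# Returns HTTP/1.1 200 OK rather than a 404 or 410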
The problem is now a bit broader and has more ramifications than first thought. Any pages from the previous system that used parameters would receive a 200 status code and remain in the index. Checking URL parameters in Webmaster Tools confirms this, with various parameters showing thousands of URLs monitored. Keep in mind Google is showing a message that there are no problems with parameters for this site.
So the advice I need now relates to URL parameters in Webmaster Tools. The new site uses SEF URLs and so makes much less use of parameters. How can I ensure that the old redundant pages with parameters are dropped from the index? This would involve thousands of 301s or 404s, let alone trying to work them all out. There is a reset link for each parameter in Webmaster Tools, but not much documentation as to what it does. If I reset all the parameters, would that clean up the index?
I'd be interested in what others think about this issue, because I feel it might be a common problem with CMS-based platforms: after major changes, thousands of parameter-based URLs just defaulting to the home page and other pages probably affects the site and page rankings.
Ian
-
The search engines are retaining the indexing of the links because following them through the redirect returns a 200 server header - which to the SEs means all is well and there is a page there to index. As you note in other responses - the only way to change that is to force the server to return a 404 header as a signal to the SEs to eventually drop it.
Yes, you could use a robots.txt directive to block those specific URLs that are the target of the spam links, in order to satisfy the URL Removal Tool's requirement for allowing a removal request. That should work as a quicker solution than trying to make coding changes in Joomla (sorry, it's been about 3.5 yrs since I've done any Joomla work).
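For example, a robots.txt block along these lines (a sketch - the pattern just mirrors the example URL you posted):
User-agent: *
# Sketch: block the spam parameter URLs ahead of a removal request
Disallow: /index.php?vc=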
Good luck!
Paul
[EDIT: Gah...ignore the P.S. as I didn't notice you don't have an easy way to get redirects into the Zeus server before Joomla kicks in. Sorry]
P.S. A final quick option would be to write a redirect in htaccess to 301-redirect the fake URLs to a real 404 page. This would kick in before Joomla got a chance to interfere with its pseudo-redirect.
-
You're right, I guess I was focused on the index. Moz isn't showing any external links to these pages, and neither is Webmaster Tools. My feeling is that Google is retaining them for some reason - maybe just the keywords in the URL?
-
I've checked the source of the visits and they are only coming from Google searches for "cheap adobe" and the like. The original malware used the site to get these searches into the index and then direct them to other sites/pages.
Being a Zeus server it doesn't use .htaccess; my task would be a lot simpler if it did. It has an alternative rewrite file, but documentation on using it for 404s is scarce.
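One idea I'm considering is short-circuiting these requests in PHP before Joomla's routing kicks in, at the very top of index.php. An untested sketch - the vc check is just a guess at what the malware generated:
// Untested sketch: send 410 Gone for the old spam parameter URLs
// before Joomla loads and applies its pseudo-redirect to the home page.
if (isset($_GET['vc'])) {
    header('HTTP/1.1 410 Gone');
    exit;
}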
I'll keep researching.
-
That means nobody clicks on them, but how did Google find them? This isn't evidence that there are no links, just that no one has visited your site through them.
-
Thanks Paul
I've checked analytics and the only source of these URLs is Google organic searches, not external sites. Unfortunately, I think my problem is the dynamic nature of Joomla and a combination of factors that cause it to behave in an SEO-unfriendly way.
I think my biggest challenge is getting the URLs to 404 before I submit them to the Webmaster Tools removal tool (which my research tells me needs to be done before you submit). I think I read there might be a robots.txt option, so I'll look into that.
Ian
-
These pages may have links from other spam sites; you don't want them to return a 200.
You want them to 404. In Joomla you can choose whether or not the site uses .htaccess; make sure it does, and 404 the pages there.
-
Thanks Alan
This seems to be caused by the combination of Joomla/Zeus and the redirection manager. The site is no longer infected - the only visits are from Google organic searches, and it's been a couple of months. Whatever the reason, Joomla decides it shouldn't 404 these pages and just displays the home page (it doesn't 301-redirect them).
My feeling is that these URLs in the index, and the visits from them, probably aren't doing the site any good.
-
Thanks Dave
I think this might be a good option, but I have a couple of problems with trying to achieve it. It's a Joomla CMS running on a Zeus server with a Search Engine Friendly URL plugin - possibly the worst combination of technologies for SEO in history. The combination of URL rewrites in Zeus and Joomla's redirection manager just displays the home page at the dodgy URL and gives it a 200 status code. I think this is why Google is taking so long to drop it from the index.
Ian
-
You absolutely do NOT want to redirect these links to the home page, Ian! These are spam links, coming from completely unrelated sites. They are Google's very definition of unnatural links and 301-redirecting them to your home page also redirects their potential damage to your home page.
You want them to return a 404 status as quickly as possible. I'd also be tempted to use the Webmaster Tools removal tool to try to speed up the process, especially if these junk links currently form a large percentage of your overall link profile. (You'll also need to find and remove the redirect that currently re-points them to the home page, for the 404 header to do its job of telling the search engines to drop the pages from their indexes.)
As far as ranking issues go, this isn't a potential duplicate-content issue; it's a damaging unnatural-links issue, which is even more significant. These are the kinds of links that could lead to at least an algorithmic penalty or, worst case, a manual penalty. Either way, these penalties are vastly harder to fix after the fact than to avoid in the first place.
In addition to the steps above, designed to make it clear those links don't belong to your site, I'd keep a good record of the links, their originating domains, and when and how they were created by the malware attack, along with your fix. That way you have essential documentation should you receive a penalty and need to submit a reinclusion request.
Hope that answers your questions?
Paul
-
Why are they redirecting back to the home page? Do you redirect them, or are you still infected?
I would make sure they 404.
-
The easiest way would be a permanent redirect on the offending URLs.
Check the incoming variable (i.e. vc) and permanently redirect with a 301 if it's an offending one. When Google sees the 301 it will drop the URL from the index.
There is also a URL removal tool in Google Webmaster Tools if the URL contains any personal information.
I had a similar issue a few days ago, from a corrupt XML sitemap; the index is already starting to clear up.