Articles marked with "This site may be hacked," but I have no security issues in the search console. What do I do?
-
There are a number of blog articles on my site that have started receiving the "This site may be hacked" warning in the SERP.
I went hunting for security issues in the Search Console, but it indicated that my site is clean. In fact, the average position of some of the articles has increased over the last few weeks while the warning has been in place.
The problem sounds very similar to this thread: https://productforums.google.com/forum/#!category-topic/webmasters/malware--hacked-sites/wmG4vEcr_l0 but that thread hasn't been touched since February. I'm fearful that the Google Form is no longer monitored.
What other steps should I take?
One query where I see the warning is "Brand Saturation" and this is the page that has the warning: http://brolik.com/blog/should-you-strive-for-brand-saturation-in-your-marketing-plan/
-
Thanks, Paul. We started resubmitting the cleaned pages yesterday. I passed your comments about the Apache install and the old version of PHP to the devs as well.
At the very least, this is a great learning experience for us. It's great to have such a helpful community.
-
It looks like the devs have cleaned up most of the obvious stuff, Matthew, so I'd get to work resubmitting the pages that were marked as hacked but no longer show that issue.
Do make sure the devs keep working on finding and cleaning up attack vectors (or just bite the bullet and pay for a year of Sucuri cleanup and protection) but it's important to get those marked pages discovered as clean before too much longer.
Also of note: your server's Apache install is quite a bit out of date, and you're also running a very old version of PHP that hasn't received even security updates for over a year. Those potential attack vectors need to be addressed right away too.
Good luck getting back into Big G's good graces!
Paul
P.S. An easy way to find the pages marked as hacked for checking/resubmission is a "site:" search, e.g. enter **site:brolik.com** into a Google search.
P.P.S. Also noted that you still have many pages from brolik-temp.com indexed. The domain name just expired yesterday, but the indexed pages showed a 302 redirect to the main domain, according to the Wayback Machine. These should be 301s in order to help the pages eventually drop out of the SERPs. (And with 301s in place, you could either submit a "Change of Address" for that domain in Webmaster Tools/GSC or do a full removal request. Either way, I wouldn't want those test-domain pages to remain in the indexes.)
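For reference, switching that 302 to a 301 on an Apache server can be as simple as an .htaccess rewrite on the test domain. A hypothetical sketch (the host names are taken from this thread; the exact rule would need to match your server setup):

```apache
# Hypothetical .htaccess for the brolik-temp.com virtual host:
# send every request to the same path on the main domain with a
# permanent (301) redirect so the old URLs drop out of the index.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?brolik-temp\.com$ [NC]
RewriteRule ^(.*)$ http://brolik.com/$1 [R=301,L]
```

The `R=301` flag is what makes the redirect permanent; without it, `RewriteRule` with `R` alone defaults to a temporary 302.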
-
Thank you, Paul. That was going to be my next question: what to do when the blog is clean.
Unfortunately, the devs are still frantically poring over the code hunting for the problem. Hopefully they find it soon.
-
Just a heads-up that you'll want to get this cleaned up as quickly as possible, Matthew. Time really is of the essence here.
Once this issue is recognised by the crawler as widespread enough to trigger a warning in GSC, it can take MONTHS to get the hacked warning removed from the SERPs after cleanup.
Get the hack cleaned up, then immediately start submitting the main pages of the site through the Fetch as Google tool to get them recrawled and detected as clean.
I recently went through a very similar situation with a client and was able to get the hacked notification removed for most URLs within 3 to 4 days of cleanup.
Paul
-
Passed it on to the dev. Thanks for the response.
I'll let you know if they run into any trouble cleaning it up.
-
It is hacked, you just have to look at the page as Googlebot. Sadly, I have seen this before.
If you set your user agent to Googlebot, you will see a different page (see attached images). Note that the title, H1 tags, and content are changed to show info on how to buy Zithromax. This is a JS-injection hack: when the user agent appears to be Googlebot, the injected script overwrites your content and inserts links to other pages to help those pages gain links. This is very black hat, bad, and yes, scary.
I use "User Agent Switcher" on FF to set my user agent; there are lots of other tools for FF and Chrome that do this. You can also crawl your site with a spider such as Screaming Frog with the user agent set to Googlebot, and you will see all the changed H1s and title tags.
It is clever: "humans" will not see this, but the bots will, so it is hard to detect. Also, if you have multiple servers, only one of them may be infected, so you may not see the injected content every time, depending on which server your load balancer sends you to. You may also want to use Fetch as Google in Webmaster console to see exactly what Google sees.
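For anyone who wants to script the check described above, the same comparison can be sketched in a few lines of standard-library Python: fetch the page twice with different User-Agent headers and compare the titles. The user-agent strings here are illustrative, and any URL can be passed in:

```python
import re
import urllib.request

# Illustrative user-agent strings: one that looks like Googlebot,
# one that looks like an ordinary desktop browser.
GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

def extract_title(html):
    """Return the <title> text from an HTML document, or None if absent."""
    match = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else None

def fetch_as(url, user_agent):
    """Fetch a URL while presenting the given User-Agent header."""
    request = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.read().decode("utf-8", errors="replace")

def compare_titles(url):
    """Fetch the page as a browser and as Googlebot.
    Two different titles for the same URL are a strong sign of cloaking."""
    return (extract_title(fetch_as(url, BROWSER_UA)),
            extract_title(fetch_as(url, GOOGLEBOT_UA)))
```

If `compare_titles` returns two different strings for the same URL (say, the real article title versus a "Buy Zithromax" title), the server is cloaking for Googlebot. As noted above, run it several times if the site sits behind a load balancer, since only one backend may be infected.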
This is very serious, show this to your dev and get it fixed ASAP. You can PM me if you need more information etc.
Good luck!