Articles marked with "This site may be hacked," but I have no security issues in the search console. What do I do?
-
There are a number of blog articles on my site that have started receiving the "This site may be hacked" warning in the SERP.
I went hunting for security issues in the Search Console, but it indicated that my site is clean. In fact, the average position of some of the articles has increased over the last few weeks while the warning has been in place.
The problem sounds very similar to this thread: https://productforums.google.com/forum/#!category-topic/webmasters/malware--hacked-sites/wmG4vEcr_l0 but that thread hasn't been touched since February. I'm fearful that the Google Form is no longer monitored.
What other steps should I take?
One query where I see the warning is "Brand Saturation" and this is the page that has the warning: http://brolik.com/blog/should-you-strive-for-brand-saturation-in-your-marketing-plan/
-
Thanks, Paul. We started resubmitting the cleaned pages yesterday. I passed your comments about the Apache install and the old version of PHP to the devs as well.
At the very least, this is a great learning experience for us. It's great to have such a helpful community.
-
It looks like the devs have cleaned up most of the obvious stuff, Matthew, so I'd get to work resubmitting the pages that were marked as hacked but no longer show that issue.
Do make sure the devs keep working on finding and cleaning up attack vectors (or just bite the bullet and pay for a year of Sucuri cleanup and protection) but it's important to get those marked pages discovered as clean before too much longer.
Also of note - your site's server's Apache install is quite a bit out of date and you're running a very old version of PHP as well that hasn't been getting even security updates for over a year. Those potential attack vectors need to be addressed right away too.
Good luck getting back into Big G's good graces!
Paul
P.S. An easy way to find the pages marked as hacked for checking/resubmission is a "site:" search, e.g. enter **site:brolik.com** into a Google search.
P.P.S. I also noticed that you have many pages from brolik-temp.com still indexed. The domain name just expired yesterday, but the indexed pages showed a 302 redirect to the main domain, according to the Wayback Machine. These should be 301s in order to help get the pages to eventually drop out of the SERPs. (And with 301s in place, you could either submit a "Change of Address" for that domain in Webmaster Tools/GSC or do a full removal request.) Either way, I wouldn't want those test domain pages to remain in the indexes.
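If you want to verify what those redirects are actually returning, a small script can fetch each indexed URL without following the redirect and report the status code. A minimal sketch in Python (standard library only; the brolik-temp.com URL in the comment is just a placeholder for whichever indexed page you're checking):

```python
import urllib.error
import urllib.request


class _NoRedirect(urllib.request.HTTPRedirectHandler):
    # Returning None tells urllib not to follow the redirect, so the
    # 3xx response surfaces as an HTTPError we can inspect directly.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None


def redirect_status(url):
    """Return (status_code, Location header or None) for the first response."""
    opener = urllib.request.build_opener(_NoRedirect)
    try:
        resp = opener.open(url, timeout=10)
        return resp.getcode(), resp.headers.get("Location")
    except urllib.error.HTTPError as e:
        return e.code, e.headers.get("Location")


# Example usage (hypothetical URL):
# status, target = redirect_status("http://brolik-temp.com/some-old-page/")
# if status == 302:
#     print(f"Temporary redirect to {target} - should be a 301")
```

Run it over the list of indexed test-domain URLs and anything that comes back 302 (or 200) is a candidate for fixing before you file the removal request.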
-
Thank you, Paul. That was going to be my next question: what to do when the blog is clean.
Unfortunately, the devs are still frantically poring over the code hunting for the problem. Hopefully they find it soon.
-
Just a heads-up that you'll want to get this cleaned up as quickly as possible, Matthew. Time really is of the essence here.
Once this issue is recognised by the crawler as being widespread enough to trigger a warning in GSC, it can take MONTHS to get the hacked warning removed from the SERPS after cleanup.
Get the hack cleaned up, then immediately start submitting the main pages of the site to the Fetch as Google tool to get them recrawled and detected as clean.
I recently went through a very similar situation with a client and was able to get the hacked notification removed for most URLs within 3 to 4 days of cleanup.
Paul
-
Passed it on to the dev. Thanks for the response.
I'll let you know if they run into any trouble cleaning it up.
-
It is hacked; you just have to look at the page as Googlebot. Sadly, I have seen this before.
If you set your user agent to Googlebot, you will see a different page (see attached images). Note that the title, H1 tags, and content are rewritten to show info on how to buy Zithromax. This is a JS injection hack: when the user agent appears to be Googlebot, the script overwrites your content and inserts links to the spammers' pages to help them gain links. This is very black hat and, yes, scary. (See attached images below.)
I use "User Agent Switcher" on Firefox to set my user agent; there are lots of other tools for Firefox and Chrome that do this. You can also run a spider such as Screaming Frog over your site with the user agent set to Googlebot, and you will see all the changed H1s and title tags.
It is clever because humans will not see this but the bots will, so it is hard to detect. Also, if you have multiple servers, only one of them may be compromised, so whether you see the hack can depend on which server the load balancer sends you to. You may want to use Fetch as Google in Webmaster Tools and see exactly what Google sees.
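To check for this kind of cloaking from a script rather than a browser extension, you can fetch the same URL twice with different User-Agent headers and diff what comes back. A minimal sketch in Python (standard library only; the user-agent strings and the title comparison are illustrative, since a real check might diff the full HTML):

```python
import re
import urllib.request

GOOGLEBOT = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
             "+http://www.google.com/bot.html)")
BROWSER = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"


def fetch_title(url, user_agent):
    """Fetch a page with the given User-Agent and return its <title> text."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    match = re.search(r"<title[^>]*>(.*?)</title>", html,
                      re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else None


def looks_cloaked(url):
    """Rough cloaking check: do the two user agents see different titles?"""
    return fetch_title(url, BROWSER) != fetch_title(url, GOOGLEBOT)
```

Note the load-balancer caveat above still applies: run the check several times, since only some requests may land on the compromised server.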
This is very serious, show this to your dev and get it fixed ASAP. You can PM me if you need more information etc.
Good luck!