Articles marked with "This site may be hacked," but I have no security issues in the search console. What do I do?
-
There are a number of blog articles on my site that have started receiving the "This site may be hacked" warning in the SERP.
I went hunting for security issues in the Search Console, but it indicated that my site is clean. In fact, the average position of some of the articles has increased over the last few weeks while the warning has been in place.
The problem sounds very similar to this thread: https://productforums.google.com/forum/#!category-topic/webmasters/malware--hacked-sites/wmG4vEcr_l0 but that thread hasn't been touched since February. I'm fearful that the Google Form is no longer monitored.
What other steps should I take?
One query where I see the warning is "Brand Saturation" and this is the page that has the warning: http://brolik.com/blog/should-you-strive-for-brand-saturation-in-your-marketing-plan/
-
Thanks, Paul. We started resubmitting the cleaned pages yesterday. I passed your comments about the Apache install and the old version of PHP to the devs as well.
At the very least, this is a great learning experience for us. It's great to have such a helpful community.
-
It looks like the devs have cleaned up most of the obvious stuff, Matthew, so I'd get to work resubmitting the pages that were marked as hacked but no longer show that issue.
Do make sure the devs keep working on finding and cleaning up attack vectors (or just bite the bullet and pay for a year of Sucuri cleanup and protection) but it's important to get those marked pages discovered as clean before too much longer.
Also of note: your server's Apache install is quite a bit out of date, and you're running a very old version of PHP that hasn't received even security updates in over a year. Those potential attack vectors need to be addressed right away too.
Good luck getting back into Big G's good graces!
Paul
P.S. An easy way to find the pages marked as hacked for checking/resubmission is a "site:" search, e.g. enter **site:brolik.com** into a Google search.
P.P.S. Also noted that you have many pages from brolik-temp.com still indexed. The domain name just expired yesterday, but the indexed pages showed a 302 redirect to the main domain, according to the Wayback Machine. These should be 301s in order to help get the pages to eventually drop out of the SERPs. (And with 301s in place, you could either submit a "Change of Address" for that domain in Webmaster Tools/GSC or do a full removal request.) Either way, I wouldn't want those test domain pages to remain in the indexes.
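For reference, a minimal sketch of how that temp-domain redirect could be switched from temporary to permanent, assuming an Apache server with mod_alias available (the directive placement and domains here are illustrative, not taken from the actual server config):

```apache
# Illustrative .htaccess/vhost snippet for the brolik-temp.com domain:
# replace the temporary (302) redirect with a permanent (301) one so the
# indexed pages eventually drop out of the SERPs.
# "Redirect permanent" is mod_alias shorthand for a 301 response.
Redirect permanent / http://brolik.com/
```

With mod_alias, the remainder of the request path is appended to the target, so a request for /blog/some-post on the temp domain would 301 to the same path on the main domain.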
-
Thank you, Paul. That was going to be my next question: what to do when the blog is clean.
Unfortunately, the devs are still frantically poring through code hunting for the problem. Hopefully they'll find it soon.
-
Just a heads-up that you'll want to get this cleaned up as quickly as possible, Matthew. Time really is of the essence here.
Once this issue is recognised by the crawler as being widespread enough to trigger a warning in GSC, it can take MONTHS to get the hacked warning removed from the SERPs after cleanup.
Get the hack cleaned up, then immediately start submitting the main pages of the site to the Fetch as Google tool to get them recrawled and detected as clean.
I recently went through a very similar situation with a client and was able to get the hacked notification removed for most URLs within 3 to 4 days of cleanup.
Paul
-
Passed it on to the dev. Thanks for the response.
I'll let you know if they run into any trouble cleaning it up.
-
It is hacked; you just have to look at the page as Googlebot. Sadly, I have seen this before.
If you set your user agent to Googlebot, you will see a different page (see attached images). Note that the title, H1 tags and content are updated to show info on how to buy Zithromax. This is a JS insertion hack: when the user agent appears to be Googlebot, the injected script overwrites your content and inserts links to other pages to help them gain links. This is very black hat and, yes, scary.
I use "User Agent Switcher" on Firefox to set my user agent; there are lots of other tools for Firefox and Chrome that do this. You can also run a spider such as Screaming Frog on your site with the user agent set to Googlebot, and you will see all the changed H1s and title tags.
It is clever because "humans" will not see this, but the bots will, so it is hard to detect. Also, if you have multiple servers, only one of them may be impacted, so you may not see the hack every time, depending on which server your load balancer sends you to. You may want to use Fetch as Google in the Webmaster console to see what Google sees.
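If you'd rather script that user-agent comparison than click around in a browser extension, here's a rough sketch in Python (stdlib only; the user-agent strings and helper names are my own illustration, not from any particular tool). The idea is to fetch the same URL twice, once as a browser and once as Googlebot, then diff the title and H1s:

```python
import urllib.request
from html.parser import HTMLParser

# Common user-agent strings; the browser one is just a plausible example.
GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/115.0"

class TitleH1Parser(HTMLParser):
    """Collects the <title> text and all <h1> texts from an HTML page."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.h1s = []
        self._current = None  # tag we are currently inside ("title" or "h1")
        self._buf = ""

    def handle_starttag(self, tag, attrs):
        if tag in ("title", "h1"):
            self._current = tag
            self._buf = ""

    def handle_data(self, data):
        if self._current:
            self._buf += data

    def handle_endtag(self, tag):
        if tag == self._current:
            if tag == "title":
                self.title = self._buf.strip()
            else:
                self.h1s.append(self._buf.strip())
            self._current = None

def page_signals(html):
    """Extract the on-page signals a cloaking hack typically rewrites."""
    parser = TitleH1Parser()
    parser.feed(html)
    return {"title": parser.title, "h1s": parser.h1s}

def cloaking_diff(browser_html, googlebot_html):
    """Return the signals that differ between the two fetches (empty dict = clean)."""
    a, b = page_signals(browser_html), page_signals(googlebot_html)
    return {key: (a[key], b[key]) for key in a if a[key] != b[key]}

def fetch_as(url, user_agent):
    """Fetch a URL while presenting the given user agent."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")
```

In use, you'd run something like `cloaking_diff(fetch_as(url, BROWSER_UA), fetch_as(url, GOOGLEBOT_UA))` per URL; any non-empty result flags a page whose title or H1s change for Googlebot. As noted above, behind a load balancer you'd want to repeat the check several times to hit each backend server.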
This is very serious, show this to your dev and get it fixed ASAP. You can PM me if you need more information etc.
Good luck!