Open Site Explorer - Top Pages that don't exist / result of a hack(?)
-
Hi all,
Last year, a website I monitor was hacked or infected with malware; I'm not sure which.
The result is hundreds of 'not found' entries in Google Search Console / Crawl Errors for non-existent pages relating to variations of 'Canada Goose'. A couple of these links are also showing up in SERPs. Here are examples of the page URLs:
ourdomain.com/canadagoose.php
ourdomain.com/replicacanadagoose.php
I looked for advice on the webmaster forums and was told to just keep marking them as 'fixed' in the console; sooner or later they would disappear. A year later, they still appear.
I've just signed up for a Moz trial and, in Open Site Explorer -> Top Pages, the top 2-5 pages are these non-existent URLs: the result of the 'canada goose' spam attack. Each of the non-existent pages has around 10 linking root domains and around 50 inbound links.
My question is: is there a more direct action I should take here? For example, reporting to Google the offending domains carrying these backlinks.
Any thoughts appreciated! Many thanks
-
Hi Mª Verónica B,
That's great. Many thanks for the confirmation.
All the best,
Colin
-
Hi Colin,
If the backlinks/inbound links are spam, then yes, upload a disavow file covering only those (a minimal sketch below).
If you have multiple 'ghost' pages in WordPress left over from erased hacked pages, then yes, use the new hidden page with all the instructions above, applied only to the spam pages.
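For reference, the disavow file is just a plain-text list of domains and URLs uploaded through Google's Disavow Links tool. A minimal sketch; these domains are placeholders, not the actual offenders:

# Domains linking to the hacked 'canada goose' pages
domain:spam-example-one.com
domain:spam-example-two.net
# Individual URLs can be listed too
http://spam-example-three.org/fake-outlet-links.html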
All the best,
Mª Verónica B.
-
Thanks again, Mª Verónica, for taking the time to respond.
OK, if I understand correctly: since those spam / 'canadagoose' related backlinks do indeed exist*, a disavow file for Google would be the thing to do here?
There was indeed a hack, which happened before I came along and is reported in Google Search Console. And there are hundreds of 'canadagoose' related crawl errors with a 404 response code that just keep coming back. It looks like those pages did once exist and must have been deleted by the website developers. So the 'empty page' technique would apply here?
*It seems to me that the 'canadagoose' pages that have since been deleted, and the backlinks pointing to those 'ghost' pages, are all part of the hack:
- hack the website, create 'canadagoose' pages
- link to the 'canadagoose' pages from other websites
Many thanks,
Colin
-
Hi Colin,
Not exactly!
We are not talking about backlinks here. Backlinks come from other websites, so we cannot control them; all we can do is upload a disavow file for Google.
That is quite different. We are talking about the hundreds of "ghosts" of deleted pages, the pages we deleted because the website was hacked.
Deleting them all is not enough, though.
Crawlers will "see" 330 or more pages with 404 status!
That is awful for SEO, because the crawlers/Google "understand" it to mean you do not care about user experience: there are so many erased pages that if somebody lands on one, there will be nothing there. So:
1. Use Moz's Top Pages to find all the spam pages, and then Google too, of course, to be sure.
2. Create a new page, not completely empty. It should say something like:
"We are truly sorry... Thanks."
This page is for the crawlers; presumably no human knows about those spam pages, except the one who hacked your website.
3. Redirect all the spam pages in the list with a 301, a permanent redirect, to the noindex/nofollow page you just created (see the sketch at the end of this reply).
4. Verify: copy and paste each URL from the list into the browser and check that it goes to the new page; also verify the page status with the MozBar.
Thanks. Good luck,
Mª Verónica
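P.S. A minimal sketch of step 3 for an Apache server, assuming the holding page was created at /spam-cleanup.php; that file name and the example URLs are placeholders, not confirmed details from this thread. In .htaccess:

# 301 each hacked spam URL to the noindex/nofollow holding page
Redirect 301 /canadagoose.php /spam-cleanup.php
Redirect 301 /replicacanadagoose.php /spam-cleanup.php

And for step 4, each redirect can also be checked from the command line:

curl -I https://ourdomain.com/canadagoose.php
# Expect: HTTP/1.1 301 Moved Permanently
# Expect: Location: https://ourdomain.com/spam-cleanup.php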
-
Many thanks for your response, Mª Verónica B; very helpful.
I've never used the disavow links tool in Google Search Console. I would have assumed this is the ideal scenario for using it to disavow specific backlinks (not all backlinks). But what you're suggesting instead is:
- create an empty/hidden (WordPress) page, and make it noindex/nofollow
- get a list of all spam backlinks from Google Search Console
- redirect all spam backlinks in the list to the empty noindex/nofollow page
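For the noindex/nofollow part, I'm assuming that means a meta robots tag in the hidden page's head, or the equivalent setting in an SEO plugin:

<meta name="robots" content="noindex, nofollow">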
This would never have occurred to me; I'm going to do it right now.
Again, many thanks!
Colin
-
Hi,
It seems that your website is in a similar situation to the one I shared before.
In my case, though, I had to take immediate action, because it was creating a very serious problem by sending malicious signals to all the crawlers.
Also, I discovered the issue by using the same Moz feature.
Thanks, Moz!
https://moz.com/community/q/more-than-450-pages-created-by-a-hacker
In my experience, sending all those pages to a new hidden page, using a 301 and the noindex and nofollow directives, sends the right signals to the crawlers of Google and the other search engines.
Let's say it strongly informs all the crawlers that those spam/404 pages are neither relevant nor interesting for your website.
Andy's response agrees that this is the best solution. He also recommends the Wordfence plugin for WordPress as a preventive measure to avoid further issues.
Good luck!