Access Denied
-
Our website, which had ranked at number 1 in Google.co.uk for our two main search terms for over three years, was hacked last November. We rebuilt the site but slipped down to number 4. We were hacked again two weeks ago and are now at number 7.
I realise that this drop may not be solely a result of the hacking, but it can't have helped.
I've just accessed our Google Webmaster Tools account and these are the current results:
940 Access Denied Errors
197 Not Found
The 940 Access Denied Errors apply to all of our main pages plus....
Is it likely that the hacking caused the Access Denied errors, and is there a clear way to repair them?
Any advice would be very welcome.
Thanks,
Colin
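One pattern worth ruling out, since the drops coincided with the hacks: compromised sites frequently end up with crawler-blocking rules injected into .htaccess, and those show up in Webmaster Tools as exactly this kind of Access Denied (403) error. A purely hypothetical example of the sort of injected rule to look for and remove (not taken from this site):

RewriteEngine On
# Malicious snippet sometimes injected into a hacked .htaccess: it returns
# 403 Forbidden ("Access Denied") to major crawlers while normal visitors
# still see the site, so the damage is easy to miss.
RewriteCond %{HTTP_USER_AGENT} (googlebot|bingbot|slurp) [NC]
RewriteRule .* - [F,L]

If nothing like that is present, the next steps would be checking the raw server logs for the status codes actually returned to Googlebot, and re-fetching the affected pages with Fetch as Google after cleanup.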
-
I also got this same message, and afterwards Google de-indexed all of my top pages and my sites dropped very significantly in the SERPs.
I don't know why Google is sending these messages.
Related Questions
-
Why do SEO agencies ask for access to our Google Search Console and Google Tag Manager?
What do they need GTM for? And what is the use case for setting up Google Search Console?
Intermediate & Advanced SEO | NBJ_SM
-
Optimizing A Homepage URL That Is Only Accessible To Logged In Users
I have a client who has a very old site with lots and lots of links to it. The site offers www.examplesite.com/loggedin as the homepage to logged-in users. So, once you're logged in, you can't get back to examplesite.com any more (unless you log out) and are instead given /loggedin as your new personalized homepage. The problem is that, over time, many users who linked to the site linked to the URL they saw after they signed up and were logged in.... www.examplesite.com/loggedin. So there are all these inbound links going to a page that is inaccessible to non-logged-in users, and thus linking to nowheresville. One idea is to fire off a 301 for non-logged-in users, forwarding them to the homepage, thus capturing much of that stranded link juice. Honestly, I'm not 100% sure you can fire off a server response code conditioned on whether someone is logged in or not. I imagine you can, but I don't know that for a technical fact. Another idea is to add some content to /loggedin, which right now is mostly blank for non-logged-in visitors except for an offer to sign in. Which do you think is better and why? Thanks... Mike
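On the question of whether the server can return a response code conditioned on login state: it can, as long as the login is visible to the server, typically through a session cookie. A minimal sketch in Apache mod_rewrite, assuming a hypothetical session cookie named sessionid (the cookie name is a placeholder, not something known about this site):

RewriteEngine On
# Visitors WITHOUT the hypothetical "sessionid" login cookie get a 301 from /loggedin to the homepage,
# recapturing the stranded link equity; logged-in users fall through to their personalized page.
RewriteCond %{HTTP_COOKIE} !(^|;\s*)sessionid= [NC]
RewriteRule ^loggedin/?$ / [R=301,L]

If the application terminates at a framework rather than at Apache, the same check can be done in application code before rendering /loggedin.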
Intermediate & Advanced SEO | 945010
-
Crawl and Indexation Error - Googlebot can't/doesn't access specific folders on microsites
Hi, my first time posting here. I am just looking for some feedback on an indexation issue we have with a client, and any thoughts on possible next steps or items I may have overlooked. To give some background, our client operates a website for the core brand and also a number of microsites based on specific business units, so you have corewebsite.com along with bu1.corewebsite.com, bu2.corewebsite.com, and so on. The content structure isn't ideal, as each microsite follows a structure of bu1.corewebsite.com/bu1/home.aspx, bu2.corewebsite.com/bu2/home.aspx and so on. In addition to this, each microsite has duplicate folders from the other microsites, so bu1.corewebsite.com has the indexable folder bu1.corewebsite.com/bu1/home.aspx but also bu1.corewebsite.com/bu2/home.aspx; the same with bu2.corewebsite.com, which has bu2.corewebsite.com/bu2/home.aspx but also bu2.corewebsite.com/bu1/home.aspx. There are 5 different business units, so you have this duplicate content scenario for all microsites. This situation is being addressed in the medium-term development roadmap and will be rectified in the next iteration of the site, but that is still a ways out. The issue:
About 6 weeks ago we noticed a drop-off in search rankings for two of our microsites (bu1.corewebsite.com and bu2.corewebsite.com); over a period of 2-3 weeks pretty much all our terms dropped out of the rankings and search visibility dropped to essentially 0. I can see that pages from the websites are still indexed, but oddly it is the duplicate content pages, so bu1.corewebsite.com/bu3/home.aspx or bu1.corewebsite.com/bu4/home.aspx is still indexed; similarly on the bu2.corewebsite.com microsite, bu2.corewebsite.com/bu3/home.aspx and bu4.corewebsite.com/bu3/home.aspx are indexed, but no pages from the BU1 or BU2 content directories seem to be indexed under their own microsites. Logging into webmaster tools I can see there is a "Google couldn't crawl your site because we were unable to access your site's robots.txt file." error. This was a bit odd, as there was no robots.txt in the root directory, but I got some weird results when I checked the BU1/BU2 microsites in the technicalseo.com robots.txt tool. Also, because there is a redirect from bu1.corewebsite.com/ to bu1.corewebsite.com/bu4.aspx, I thought maybe there could be something there, so we removed the redirect and added a basic robots.txt to the root directory for both microsites. After this we saw a small pickup in site visibility, and a few terms popped into our Moz campaign rankings but dropped out again pretty quickly. Also, the error message in GSC persisted. Steps taken so far after that:
1. In Google Search Console, I confirmed there are no manual actions against the microsites.
2. Confirmed there are no instances of noindex on any of the pages for BU1/BU2.
3. A number of the main links from the root domain to microsites BU1/BU2 have a rel="noopener noreferrer" attribute, but we looked into this and found it has no impact on indexation.
4. Looking into this issue, we saw some people had similar issues when using Cloudflare, but our client doesn't use this service.
5. Using a response/redirect header checker tool, we noticed a timeout when trying to mimic Googlebot accessing the site.
6. Following on from point 5, we got hold of a week of server logs from the client; I can see Googlebot successfully pinging the site and not getting 500 response codes from the server, but I couldn't see any instance of it trying to index microsite BU1/BU2 content.
So it seems to me that the issue could be something server side, but I'm at a bit of a loss for next steps to take. Any advice at all is much appreciated!
Intermediate & Advanced SEO | ImpericMedia
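One thing worth double-checking on the server side, given the robots.txt fetch error and the root redirect: make sure robots.txt is always served directly with a 200 and is exempted from any rewrite or redirect rules. A minimal sketch of that idea, written as Apache mod_rewrite in a document-root .htaccess purely for illustration (the microsites above appear to run on ASP.NET/IIS, where the equivalent would be an IIS URL Rewrite rule):

RewriteEngine On
# Serve robots.txt as-is and stop processing, so no later rule can redirect it or time it out
RewriteRule ^robots\.txt$ - [L]
# Illustrative root-to-home redirect for a microsite; kept temporary (302) while diagnosing crawl issues
RewriteRule ^$ /bu1/home.aspx [R=302,L]
-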
Moving site to new domain without access to redirect from old to new. How can I do this with as little loss to SERP results as possible?
I've been hired to build a new site for a customer. They were duped by some shady characters at goglupe.com (if you can reach them, tell them they are rats -- phone is disconnected, address is a comedy club on Mission in SF). Glupe owns the domain name and would not transfer it or give FTP access prior to dropping off the face of the earth. The customer doesn't want to chase after them with lawyers, so we are moving on. New domain, new site with much of the same content as the previous site. All that I have access to is the old WordPress site. I plan to build the new site, then remove all pages/posts from the old site. Is there anything I can do to salvage the current page 1 ranking? Obviously, the new domain will take some time to get back there. Just hoping to avoid any pitfalls or penalties if I can. If I had complete access, I would follow all the standard guidelines. But I don't. Any thoughts? Thanks! Chris
Intermediate & Advanced SEO | c_estep_tcbguy
-
How to 301 Redirect /page.php to /page, after a RewriteRule has already made /page.php accessible by /page (Getting errors)
A site has its URLs with .php extensions, like this: example.com/page.php. I used the following rewrite to remove the extension so that the page can now be accessed from example.com/page:
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.*)$ $1.php [L]
It works great; I can access the page via the example.com/page URL. However, the problem is that the page can still be accessed from example.com/page.php. Because I have external links going to the page, I want to 301 redirect example.com/page.php to example.com/page. I've tried this a couple of ways, but I get redirect loops or 500 internal server errors. Is there a way to have both: remove the extension and 301 the .php URL to the extensionless one? By the way, if it matters, page.php is an actual file in the root directory (not created through another rewrite or URI routing). I'm hoping I can do this, and not just throw an example.com/page canonical tag on the page. Thanks!
Intermediate & Advanced SEO | rcseo
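A sketch of one common way to get both behaviours without a loop, assuming Apache mod_rewrite in the site's .htaccess and that these are the only rules touching the URLs in question. The key is to trigger the 301 from the original request line (%{THE_REQUEST}) rather than from the rewritten URL, so the internal .php rewrite can never re-fire the redirect:

RewriteEngine On

# 1) Externally 301 direct requests for /page.php to /page.
#    Matching against THE_REQUEST (the raw client request line) ignores
#    internally rewritten URLs, which is what prevents the redirect loop.
RewriteCond %{THE_REQUEST} ^[A-Z]+\s/([^?\s]+)\.php[\s?] [NC]
RewriteRule ^ /%1 [R=301,L]

# 2) Internally map the extensionless URL back onto the real .php file.
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.*)$ $1.php [L]

With this in place, example.com/page.php returns a 301 to example.com/page, while example.com/page still serves page.php internally with a 200.
-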
Domain switch planned - new domain accessible - until the switch: redirect from new to old domain with 307?
Hi there, we are going to switch our local domain oldsite.at to newsite.com in November. As our IT department already wants to use newsite.com for email traffic until then, the domain newsite.com has to be publicly accessible, and it currently shows the default Apache page with no useful content. The old domain has quite some trust; the new domain is a first-time registered domain (not yet known to search engines and not published anywhere). The domain was parked until now. I am aware of the steps to take for the switch itself, but: what should we do with the newsite.com domain until everything is prepared for the switch? I suppose users or search engines will find the domain, and as there is no useful information available, it is already harming us. My idea was to 307 redirect newsite.com to oldsite.at, but the concern is that this could cause problems as soon as we switch the domains and start redirecting with 301s from oldsite.at to newsite.com. Do you have any objections or other recommendations? Thank you a lot in advance.
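For reference, a minimal sketch of that temporary setup, assuming the newsite.com default page is served by Apache (the hostnames are the ones from the question; the scheme and everything else are placeholders). Because 307 is explicitly temporary, search engines should keep treating oldsite.at as canonical until the real switch, at which point this block is removed and the permanent 301s from oldsite.at to newsite.com go live:

# Temporary catch-all on newsite.com until the November switch (does not affect email/MX traffic)
<VirtualHost *:80>
    ServerName newsite.com
    ServerAlias www.newsite.com
    RewriteEngine On
    # 307 = temporary redirect: visitors land on the live site, and the new host is not indexed as a separate site
    RewriteRule ^ https://oldsite.at%{REQUEST_URI} [R=307,L]
</VirtualHost>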
Intermediate & Advanced SEO | comicron
Google Reconsideration - Denied for the Third Time
I have been in the process of trying to get past a "link scheme" penalty for just over a year. I took on the client in April 2012; they had received their penalty in February 2012, before I started. Since then we have been manually removing links, contacting webmasters for link removal, blocking over 40 different domains via the disavow tool, and requesting reconsideration multiple times. All I get in return is "Site violates Google's quality guidelines." So we regrouped and did some more research, and found that about 90% of the offending spam links pointed to only 3 pages of the website, so we decided to just delete those pages, display a 404 error in their place and create new pages with new URLs. At first everything was looking good: the new pages were ranking and receiving page authority, and the old pages were gone from the indexes. So we resubmitted for reconsideration for the third time and got the exact same response! I don't know what else to do. I did everything I could think of, with the exception of deleting the whole site. Any advice would be greatly appreciated. Regards - Kyle
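For anyone in a similar position, the disavow file itself is just a plain text file uploaded through Google's disavow links tool; a minimal sketch with placeholder domains (not the actual offending sites from this situation):

# Lines starting with "#" are comments and are ignored.
# Disavow every link from an entire domain:
domain:spammy-directory.example
domain:paid-links.example
# Disavow one specific URL only:
http://blog.example/some-spammy-post.html

Google's guidance is to pair the upload with evidence of manual removal attempts in the reconsideration request, which matches the steps already described above.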
Intermediate & Advanced SEO | kchandler
Can I Improve Organic Ranking by Restricting Website Access to a Specific IP Address or Geo Location?
I am targeting my website at the US, so I need to get high organic rankings in US web search. One of my competitors is restricting website access to specific IP addresses or geo locations. I have checked multiple categories to learn more. What's going on with this restriction, and why have they done it? One SEO forum is also restricting website access to specific locations. I can understand that; it may help them stop thread spamming with unnecessary sign-ups or Q&A posts. But why has Lamps Plus set this up? Is there any specific reason? Can I improve my organic ranking this way? Such a restriction may help me maintain user statistics such as bounce rate, average page views per visit, etc.
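For context, a minimal sketch of how such a restriction is typically implemented at the web server level, shown as Apache configuration purely for illustration; the country-code variable assumes a GeoIP module (e.g. mod_geoip2 / mod_maxminddb) is installed and configured, and the path and IP range are placeholders. Note that a blanket block on non-US visitors can also block crawlers, so by itself it is not an SEO tactic:

<Directory "/var/www/example-site">
    # Option 1: allow only a specific IP range (placeholder range shown)
    # Require ip 203.0.113.0/24

    # Option 2: allow only visitors geolocated to the US
    # (GEOIP_COUNTRY_CODE is set by the GeoIP module when it is enabled)
    SetEnvIf GEOIP_COUNTRY_CODE ^US$ allowed_country
    Require env allowed_country
</Directory>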
Intermediate & Advanced SEO | CommercePundit