How can I prevent sh404SEF Anti-flood control from blocking SEOMoz?
-
I'm using sh404SEF on my Joomla 1.5 website. Last week, I activated the security functions of the tool, which include an anti-flood control feature. This morning, when I looked at my new crawl statistics in SEOMoz, I noticed a significant drop in the number of web pages crawled, and I'm attributing that to the security configuration changes I made earlier in the week.
I'm looking for a way to prevent this from happening so the next crawl is accurate. I was thinking of using sh404SEF's "UserAgent white list" feature. Does SEOMoz have a UserAgent string that I could add to my white list? Is this what you guys recommend as a solution to this problem?
-
Frankly speaking, there is no reason for you to keep the security function switched on. I run around 20 websites on Joomla, all of them with sh404SEF installed, since I find it the most accurate and bug-free SEF component for Joomla. In all the time I have been using it, the security function has stayed switched off, and I'd recommend the same for you.
Remember that the main purpose of sh404SEF is to make your website more SEO friendly, not to protect you from hacker attacks. It isn't really a security component, and its security features may still have plenty of bugs and rough edges.
By the way, I've read that many people have run into the same problem as you, but with different robots, not only SEOmoz. So be careful: it could also have a negative impact on how other search engines crawl your site.
My advice: just turn it off ;)
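If you do decide to keep the anti-flood control enabled, it's worth confirming from your raw access logs whether the crawler was actually being refused before the next crawl. Below is a minimal sketch, assuming a standard Apache "combined" log format and that SEOmoz's crawler identifies itself with a user agent containing "rogerbot"; the log path and the bot token are placeholders to adjust for your own host.

```python
#!/usr/bin/env python3
"""Rough check of how the server has been answering SEOmoz's crawler.

Hypothetical sketch: assumes an Apache/LiteSpeed "combined" access log
and a crawler user agent containing "rogerbot". Adjust LOG_PATH and
BOT_TOKEN for your hosting setup.
"""
import re
from collections import Counter
from pathlib import Path

LOG_PATH = Path("/var/log/apache2/access.log")  # placeholder: use your host's log path
BOT_TOKEN = "rogerbot"                          # assumed Moz crawler user-agent token

# Combined log format ends with: "REQUEST" STATUS SIZE "REFERER" "USER-AGENT"
LINE_RE = re.compile(r'"(?P<request>[^"]*)" (?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"')

status_counts = Counter()
with LOG_PATH.open(encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE_RE.search(line)
        if match and BOT_TOKEN.lower() in match.group("agent").lower():
            status_counts[match.group("status")] += 1

# Mostly 403/503 responses point at the anti-flood control;
# mostly 200s suggest the crawl drop has another cause.
for status, count in status_counts.most_common():
    print(f"{status}: {count}")
```

If the responses to the crawler are mostly error codes, either switch the feature off as suggested above or add the crawler's user agent to sh404SEF's white list so the next crawl can run normally.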