SEOBook RankChecker works at one location but not another
-
I use SEOBook's RankChecker to do spot checking of keywords for clients or for potential clients. I like the tool quite a bit, but I've noticed that it recently stopped working at our office (instead of rankings, I just get a series of dashes) but works fine from my home computer. I'm thinking that it may have to do with our company's firewall.
Anyone have any thoughts?
-
Aaron,
Thanks for your response. I have installed what I believe is the latest version (1.8.21). I have tried to turn off Google Instant, although this isn’t as easy as I thought. They removed the ability to turn it off and replaced it with a box where you can check “never show instant results”. Even when you do this, you still see instant results.
I have the delay between queries set to 12 seconds, so I don’t think that is the issue. My next steps will be to check the firewall and then delete my current extensions in Firefox to see if any conflict.
Eric
-
- Do you have the most recent version of the extension installed at your work?
- Have you turned off Google Instant?
- Do Google results render fine, or is there a captcha on them? If you're seeing a captcha, have you increased the delay between queries?
- It could also be an issue with the firewall or security software, but these are much less common causes than having no delay between searches or problems with Google Instant. (See the sketch after this list for a quick way to test whether Google is blocking the office IP.)
- Another possible cause is a conflicting extension in Firefox.
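For anyone with the same dashes-only symptom, here is a minimal diagnostic sketch in Python. It is an assumption-laden illustration, not part of RankChecker: the queries and User-Agent are placeholders, and the 12-second delay simply mirrors the setting Eric mentions. Run it from the office and from home; a CAPTCHA page or an HTTP 429/503 at the office only would point at an IP-level block rather than the extension or the firewall.

```python
import time
import urllib.error
import urllib.parse
import urllib.request

# Spot-check a few Google results pages from the affected network. The
# queries and User-Agent below are placeholders; the 12-second delay
# matches the setting Eric mentions. A /sorry/ redirect or an HTTP
# 429/503 means Google is rate limiting this IP, which would explain
# the dashes in RankChecker.
QUERIES = ["seo tools", "rank checker", "keyword research"]
DELAY_SECONDS = 12

for query in QUERIES:
    url = "https://www.google.com/search?q=" + urllib.parse.quote(query)
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(req, timeout=15) as response:
            body = response.read().decode("utf-8", errors="replace")
            # Google redirects rate-limited clients to a /sorry/ CAPTCHA page.
            blocked = "/sorry/" in response.geturl() or "captcha" in body.lower()
            print(f"{query!r}: HTTP {response.status}, blocked={blocked}")
    except urllib.error.HTTPError as error:
        print(f"{query!r}: HTTP error {error.code}")
    time.sleep(DELAY_SECONDS)
```

If both locations come back clean, a conflicting Firefox extension or local security software becomes the more likely culprit.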
Related Questions
-
How can I use multiple domains for one website?
Dear Experts, I want to make an online store like www.abc.com, and I plan to buy 3 more domains (www.abc.co.uk, www.abc.com.au, www.abc.ae) and redirect them all to the main domain, www.abc.com. But if somebody searches from the UK, I want him/her to see the www.abc.co.uk domain in the search results, and if somebody searches from the UAE, to see the www.abc.ae domain, and the same for the other extensions. How can I stay safe from duplication with multiple domains for one website? What SEO strategy should I follow? I am hoping for a positive reply from your side. Thanks
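For illustration, the standard mechanism for this kind of country targeting is hreflang annotation rather than redirecting everything to one domain. Below is a minimal Python sketch that generates the tags; the domains come from the question, while the locale codes and the example path are assumptions.

```python
# Generate the <link rel="alternate" hreflang="..."> tags each page would
# carry in its <head>. The domains come from the question; the locale
# codes and the example path are assumptions.
DOMAIN_LOCALES = {
    "https://www.abc.com": "x-default",  # global fallback
    "https://www.abc.co.uk": "en-gb",
    "https://www.abc.com.au": "en-au",
    "https://www.abc.ae": "en-ae",
}

def hreflang_tags(path: str) -> str:
    return "\n".join(
        f'<link rel="alternate" hreflang="{locale}" href="{domain}{path}" />'
        for domain, locale in DOMAIN_LOCALES.items()
    )

print(hreflang_tags("/some-product"))
```

Note that hreflang only works if each country domain actually serves its pages; blanket 301s from every ccTLD to www.abc.com would leave Google nothing country-specific to show UK or UAE searchers.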
Technical SEO | jfdagborrbg
-
Having issues with redirects not working and old links in SERPs
We just migrated a site and built a redirect map from Site A to Site B. If there were old redirects made for Site A that weren't pulled when we pulled the internal links for Site A, do those also need to be redirected to Site B to eliminate a redirect chain? We cannot figure out why old links are still showing up; does it take a few days for Google to figure out that these are not real pages?
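For illustration, a minimal sketch (Python with the requests library; the URL is a placeholder) that follows an old link hop by hop, making any chain from an old Site A redirect through to Site B visible:

```python
import requests

# Follow one old Site A URL hop by hop. The URL is a placeholder; run
# this against real old links still showing in the SERPs. More than one
# hop in the history means a redirect chain that should be collapsed so
# the old URL points straight at its final Site B destination.
def show_redirect_chain(url: str) -> None:
    response = requests.get(url, allow_redirects=True, timeout=10)
    for hop in response.history + [response]:
        print(hop.status_code, hop.url)
    if len(response.history) > 1:
        print("-> redirect chain detected")

show_redirect_chain("https://site-a.example/old-page")
```

On the second point: yes, it can take days to weeks for Google to recrawl old URLs and drop them, so old links lingering in the SERPs shortly after a migration is normal.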
Technical SEO | Ideas-Collide
-
Duplicate content when working with makes and models?
Okay, so I am running a store on Shopify at the address https://www.rhinox-group.com. This store is reasonably new, so it is being updated constantly! The thing that is really annoying me at the moment, though, is that I am getting errors in the form of duplicate content. This seems to be because we work using the machine make and model, which is obviously imperative, but we then have various products for each machine make and model. Does anyone have suggestions on how I can cut down on these errors? The last thing I want is to be penalised by Google for this! Thanks in advance, Josh
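For illustration, a minimal sketch (Python with the requests library; the product paths are hypothetical) that flags make/model pages serving byte-identical content. A real audit would strip shared boilerplate before hashing, since near-duplicates rarely hash identically:

```python
import hashlib
import requests

# Hash each page body and report collisions. The product paths are
# hypothetical stand-ins for make/model pages on the store.
urls = [
    "https://www.rhinox-group.com/products/model-a-bucket",
    "https://www.rhinox-group.com/products/model-b-bucket",
]

seen: dict[str, str] = {}
for url in urls:
    body = requests.get(url, timeout=10).text
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    if digest in seen:
        print(f"exact duplicate: {url} matches {seen[digest]}")
    else:
        seen[digest] = url
```

On Shopify the usual remedy is a rel=canonical tag from each per-model variant page to one primary product page, so the duplicates consolidate rather than compete.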
Technical SEO | josh.sprakes
-
If Google's index contains multiple URLs for my homepage, does that mean the canonical tag is not working?
I have a site which is using canonical tags on all pages; however, not all duplicate versions of the homepage are 301'd, due to a limitation in the hosting platform. So some site visitors get www.example.com/default.aspx while others just get www.example.com. I can see the correct canonical tag in the source code of both versions of this homepage, but when I search Google for the specific URL "www.example.com/default.aspx", I see that they've indexed that specific URL as well as the "clean" one. Is this a concern... shouldn't Google only show me the clean URL?
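For illustration, a minimal sketch (Python standard library only; example.com stands in for the real domain) that fetches both homepage variants and prints the canonical URL each one declares:

```python
import re
import urllib.request

# Fetch both homepage variants and print the canonical URL each declares.
# example.com stands in for the real domain; the regex is deliberately
# naive and assumes rel="canonical" appears before href in the tag.
def canonical_of(url: str) -> str:
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    match = re.search(
        r'<link[^>]*rel=["\']canonical["\'][^>]*href=["\']([^"\']+)', html, re.I
    )
    return match.group(1) if match else "(no canonical tag found)"

for variant in ("https://www.example.com/", "https://www.example.com/default.aspx"):
    print(variant, "->", canonical_of(variant))
```

Also worth knowing: rel=canonical is a hint rather than a directive, so Google can keep the /default.aspx variant indexed for a while, particularly if internal links or sitemaps still reference it.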
Technical SEO | JMagary
-
Use of Location Folders
I'd like to understand the pros and cons of using a location subfolder as an SEO strategy (example: http://sqmedia.us/Dallas/content-marketing.html), where the /Dallas folder holds all of my keyword-rich page titles. The strategy is to get local-SEO benefits from the use of the folder titled /Dallas (a folder which is unnecessary in the overall structure of this site), but how much is this strategy taking away from the page-title keywords' effectiveness?
Technical SEO | sqmedia
-
Different IPs on one server
Hi, I just want to ask: is there any bad effect on SEO if we have different websites that have different IP addresses but are shared on only one server? Thank you
Technical SEO | TirewebMarketing
-
De-indexing millions of pages - would this work?
Hi all, We run an e-commerce site with a catalogue of around 5 million products. Unfortunately, we have let Googlebot crawl and index tens of millions of search URLs, the majority of which are very thin on content or duplicates of other URLs. In short: we are in deep. Our bloated Google index is hampering our real content's ability to rank; Googlebot does not bother crawling our real content (product pages specifically) and hammers the life out of our servers. Since having Googlebot crawl and de-index tens of millions of old URLs would probably take years (?), my plan is this:
1. 301 redirect all old SERP URLs to a new SERP URL.
2. If the new URL should not be indexed, add a meta robots noindex tag on the new URL.
3. When it is evident that Google has indexed most "high quality" new URLs, disallow crawling of the old SERP URLs in robots.txt.
4. Then directory-style remove all old SERP URLs in the GWT URL Removal Tool.
This would be an example of an old URL: www.site.com/cgi-bin/weirdapplicationname.cgi?word=bmw&what=1.2&how=2
This would be an example of a new URL: www.site.com/search?q=bmw&category=cars&color=blue
I have two specific questions:
1. Would Google both de-index the old URL and not index the new URL after 301 redirecting the old URL to the new URL (which is noindexed), as described in point 2 above?
2. What risks are associated with removing tens of millions of URLs directory-style in the GWT URL Removal Tool? I have done this before, but then I removed "only" some 50,000 useless "add to cart" URLs. Google says themselves that you should not remove duplicate/thin content this way and that using the tool this way "may cause problems for your site".
And yes, these tens of millions of SERP URLs are the result of a faceted navigation/search function let loose all too long. And no, we cannot wait for Googlebot to crawl all these millions of URLs in order to discover the 301. By then we would be out of business. Best regards, TalkInThePark
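For illustration, a minimal sketch of steps 1 and 2 of this plan (Python with Flask; the mapping from the old CGI parameters to the new query string is hypothetical, and only the two example URL shapes come from the question):

```python
from flask import Flask, redirect, request

# Steps 1 and 2 of the plan, sketched with Flask. The mapping from the
# old CGI parameters to the new query string is hypothetical; only the
# two example URL shapes come from the question.
app = Flask(__name__)

@app.route("/cgi-bin/weirdapplicationname.cgi")
def old_serp():
    # Step 1: 301 every old SERP URL to its new equivalent.
    word = request.args.get("word", "")
    return redirect(f"/search?q={word}", code=301)

@app.route("/search")
def new_serp():
    # Step 2: serve the new SERP but mark it noindex. The X-Robots-Tag
    # header has the same effect as a meta robots noindex tag.
    return "<html><body>search results</body></html>", 200, {"X-Robots-Tag": "noindex"}
```

One caveat on the sequencing: Google can only discover the 301s by recrawling the old URLs, and can only see the noindex while the new URLs remain crawlable, so the robots.txt disallow in step 3 genuinely has to wait, exactly as the plan says.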
Technical SEO | TalkInThePark
-
Destination URL in SERPs keeps changing and I can't work out why... Help.
I am befuddled as to why our destination URL in the SERPs keeps changing. For "oak furniture" it was nicely returning http://www.thefurnituremarket.co.uk/oakfurniture.asp, then yesterday I did 2 things: I published a link to that page on Facebook as part of a competition, and I redirected dynamic pages to the static URL for oak furniture. Now, for "oak furniture", the SERPs in Google UK return our home page as the most relevant landing page. Any idea why? I'm leaning more towards an on-page issue than the posting on FB. Thoughts?
Technical SEO | robertrRSwalters