How do I complete a reverse DNS check when doing log file analysis?
-
I'm doing some log file analysis and need to run a reverse DNS check to ensure that I'm analysing logs from Google and not any imposters. Is there a command I can use in the terminal to do this?
If not, what's the best way to verify Googlebot?
Thanks
-
That's awesome! Glad to know there's a bulk tool out there!
-
Hi Tyler,
Thanks for your reply. I managed to get down to 98 unique IPs and ran a bulk reverse DNS/IP lookup using this tool:
https://www.infobyip.com/ipbulklookup.php
Thanks for your help though!
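For anyone who'd rather script the bulk lookup locally instead of using a web tool, here's a minimal Python sketch using only the standard library; "ips.txt" is a hypothetical file with one IP address per line, exported from your logs:

    import socket

    # Read the unique IP addresses exported from the log files.
    with open("ips.txt") as f:
        ips = [line.strip() for line in f if line.strip()]

    # Reverse (PTR) lookup for each IP; a failure usually means
    # the address simply has no PTR record.
    for ip in ips:
        try:
            hostname = socket.gethostbyaddr(ip)[0]
        except (socket.herror, socket.gaierror):
            hostname = "no PTR record"
        print(ip, hostname)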
-
Hey Daniel,
If you want to verify that a user-agent is actually Googlebot, you'll want to use a log file analysis tool to aggregate all of the IP addresses whose requests claim the Googlebot user-agent. Once you have that list, you can perform a reverse DNS lookup on each IP to verify whether it is actually associated with Googlebot.
If you're on Windows, these steps should work:
https://www.serverintellect.com/support/dns/reverse-dns/
If you're on a Mac, try these steps:
1. Open Terminal
2. Type "host" followed by the IP address
for example: "host 66.249.66.1"
3. Hit Enter
4. View the results. For example: "1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com"
If the results point to google.com or googlebot.com, you can be sure it's actually Google crawling your site. Unfortunately, I don't know of any faster ways to achieve these results. I'm sure there's a tool out there, I just haven't found it yet.
This might also be a good resource for you: https://support.google.com/webmasters/answer/80553?hl=en
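Short of a dedicated tool, you can also script the whole check. Here's a minimal Python sketch of the forward-confirmed reverse DNS procedure that Google's article describes: reverse-resolve the IP, check the domain, then forward-resolve the hostname and confirm it maps back to the same IP. It's IPv4-only and a starting point rather than a finished tool:

    import socket

    def is_googlebot(ip):
        # Reverse (PTR) lookup: IP -> hostname.
        try:
            hostname = socket.gethostbyaddr(ip)[0]
        except (socket.herror, socket.gaierror):
            return False
        # Genuine Googlebot hostnames end in googlebot.com or google.com.
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False
        # Forward lookup: the hostname must resolve back to the original IP.
        try:
            return socket.gethostbyname(hostname) == ip
        except socket.gaierror:
            return False

    print(is_googlebot("66.249.66.1"))  # the example IP from the steps above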
Good luck!
-Tyler
Related Questions
-
Unsolved: Temporary redirect from 302 to 301 for PNG file?
#302HTTP #temporaryredirect
Technical SEO | Damian_Ed
Hi everyone, I've recently faced a crawl issue with the media images on my website. For example, this page URL https://intreface.com/wp-content/uploads/2022/12/Horion-screen-side-2.png returns a 302 HTTP status, and the recommendation is to change it to a 301. I have read the article on temporary redirects here:
https://moz.com/learn/seo/redirection
but it doesn't explain how to redirect a single image URL in my HTML rather than the landing page.
I have messaged Moz Support, but they recommended asking the Moz Community!
Could you assist me with this issue please? I can reach the HTML of the necessary page and change what I need for a permanent redirect, but first I need to understand how to do that correctly.
-
Does using a reverse proxy to make a subdomain appear as a subdirectory affect SEO?
Using a reverse proxy only makes it appear that a subdomain is really a subfolder; the links themselves remain the same in the end. Does this have any negative (or positive) impact on SEO? Does it make it difficult for the blog's (subdomain's) sitemap or robots.txt file to be properly read by search engines?
Technical SEO | rodelmo41
-
Is there a limit to how many URLs you can put in a robots.txt file?
We have a site that has way too many URLs caused by our crawlable faceted navigation. We are trying to purge 90% of our URLs from the indexes. We put noindex tags on the URL combinations that we do not want indexed anymore, but it is taking Google way too long to find the noindex tags. Meanwhile we are getting hit with excessive URL warnings and have been hit by Panda. Would it help speed the process of purging URLs if we added the URLs to the robots.txt file? Could this cause any issues for us? Could it have the opposite effect and block the crawler from finding the URLs, but not purge them from the index? The list could be in excess of 100MM URLs.
Technical SEO | kcb81780
-
Why are Google search results different depending on whether you are logged into Google or not?
I get different results when I'm logged into the Google account associated with my website than when I'm not, even in the same country. So how can I rely on the Google results I'm seeing? For instance, if I'm logged in, my site is on page 1 with the improvements I made based on SEOmoz; yet if I'm not logged in, I'm not in the first 25 pages.
Technical SEO | Romana0
-
Robots.txt file getting a 500 error - is this a problem?
Hello all! While doing some routine health checks on a few of our client sites, I spotted that the site of a new client of ours - whose website was not designed or built by us - returns a 500 internal server error when I try to look at the robots.txt file. As we don't host or maintain their site, I would have to go through their head office to get this changed, which isn't a problem, but I just wanted to check whether this error will actually have a negative effect on their site, and whether there's a benefit to getting it changed? Thanks in advance!
Technical SEO | themegroup0
-
Content loc and player loc tags for XML video sitemaps
I need a little help understanding how to create two of the required tags for an XML video sitemap for Google: 1. <video:content_loc> 2. <video:player_loc>. Google explains their video XML sitemap requirements here:
Technical SEO | dsexton10
www.google.com/support/webmasters/bin/answer.py?answer=80472
Using the example on this Google Webmaster Help page (where they explain all six of the required tags), here are examples of the two tags I need help with:
<video:content_loc>www.example.com/video123.flv</video:content_loc>
<video:player_loc allow_embed="yes" autoplay="ap=1">www.example.com/videoplayer.swf?video=12...</video:player_loc>
The video I am trying to optimize is located on a page on my site:
www.mountainbikingmaine.com/races/bradbury_hawk.html
This page has an embedded Vimeo video, so I don't have the video file on my domain; it is on Vimeo. Here is the source code from my page that I think provides the information I need to create the two tags that Google requires:
<iframe src="http://player.vimeo.com/video/24580638?title=0&byline=0&portrait=0" width="400" height="533" frameborder="0"></iframe>
Bradbury Mountain Maine Hawk Migration Count (vimeo.com/24580638) from dan sexton (vimeo.com/user3219915)
Using this source from my site, can you suggest what to put in the two tags? Thanks! Dan
-
Dynamically generated .PDF files, instead of normal pages, indexed and ranking in Google
Hi, I've come across a tough problem. I am working on an online store website which contains the functionality of viewing product details in .PDF format (by the way, the website is built on the Joomla CMS). Now when I search my site's name in Google, the SERP simply displays my .PDF files in the first couple of positions (shown in the normal .PDF file format: [PDF]...) and I cannot find the normal pages on SERP #1 unless I search the full site domain in Google. I really don't want this! Would you please tell me how to figure the problem out and solve it? I can actually remove the corresponding component (Virtuemart) that is in charge of generating the .PDF files. For now I am trying to redirect all the .PDF pages ranking in Google to a 404 page and remove the functionality; I plan to regenerate a sitemap of my site and submit it to Google. Will that work for me? I'd really appreciate it if you could help solve this problem. Thanks very much. Sincerely, SEOmoz Pro Member
Technical SEO | fugu0
-
What tool do you use to check for URLs not indexed?
What is your favorite tool for getting a report of URLs that are not cached/indexed in Google & Bing for an entire site? Basically I want a list of URLs not cached in Google and a separate list for Bing. Thanks, Mark
Technical SEO | elephantseo3