Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.
How to find all crawlable links on a particular page?
-
Hi! This might sound like a newbie question, but I'm trying to find all crawlable links (the ones Googlebot sees) on a particular page of my website. I'm trying to use Screaming Frog, but that gives me all the links on that particular page AND on all subsequent pages in the given sub-directory. What I want is ONLY the crawlable links pointing away from a particular page. What is the best way to go about this? Thanks in advance.
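For reference, if a short script is an option, the links on a single page can be pulled directly without crawling any deeper. Below is a minimal sketch in Python using the requests and beautifulsoup4 libraries; the helper name crawlable_links and the example URL (the site mentioned later in this thread) are illustrative, and skipping rel="nofollow" is only an approximation of what Googlebot will follow:

```python
# Minimal sketch: fetch one page and list every crawlable <a href> link on it.
# Requires: pip install requests beautifulsoup4
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawlable_links(page_url):
    """Return the absolute URLs of all followable links on a single page."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    links = set()
    for a in soup.find_all("a", href=True):
        # Skip rel="nofollow" links, which Googlebot may choose not to follow.
        if "nofollow" in (a.get("rel") or []):
            continue
        href = a["href"]
        # Ignore fragments and non-HTTP schemes.
        if href.startswith(("#", "mailto:", "javascript:", "tel:")):
            continue
        links.add(urljoin(page_url, href))  # resolve relative URLs
    return sorted(links)

if __name__ == "__main__":
    for link in crawlable_links("http://www.wishpicker.com/"):
        print(link)
```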
-
Thanks for sharing this information, Thomas. Appreciate your time and help here. Regards.
-
I understand now: you're referring to how far a link is from the home page (crawl depth), not a URL parameter. Here's some information on a tool I'm using right now:
http://www.internetmarketingninjas.com/seo-tools/google-sitemap-generator/
Here is an HTML file of the results; you can see the "how far from home" value on the left-hand side. I suggest you run the tool yourself so you can see the full results.
Using the IMN Google Site Map Generator
Links are critically important to webpages, not only for connecting to other, related pages to help end users find the information they want, but for optimizing the pages for SEO. The Find Broken Links, Redirects & Google Sitemap Generator Free Tool allows webmasters and search engine optimizers to check the status of both external and internal links on an entire website. The resulting report gives webmasters and SEOs insight into the link structure of a website and identifies link redirects and errors, all of which helps in planning a link optimization strategy. The downloadable results and the sitemap generator are always free for everyone.
Get started
To start with the free sitemap generator, type (or paste) the full home page URL of the website you want scanned. Select the number of pages you want to scan (up to 500, up to 1,000, or up to 10,000). Note that the job starts immediately and runs in real time. For larger sites containing numerous pages, the process can take up to 30 minutes to crawl and gather data on 1,000 pages (and longer still for very large sites). You can set the Google sitemap generator tool to send you an email once the crawl is completed and the data report is prepared. The online sitemap generator offers several options and also acts as an XML sitemap generator or an HTML sitemap generator.
Note that the results table data of the online sitemap generator is interactive. Most of the data items are linked, either to the URLs referenced or to details about the data. For most cells that contain non-URL data, pause the mouse over the cell to see the full results.
Results Bar
When the tool starts, a results bar appears at the top of the page showing the following information:
- Status of the tool (Crawling or Done)
- Number of Internal URLs crawled
- Number of External links found
- Number of Internal HTTP Redirects found
- Number of External HTTP Redirects found
- Number of Internal HTTP error codes found
- Number of External HTTP error codes found
For those who need sitemaps from either an HTML sitemap generator or an XML sitemap generator, there are corresponding options offered here. Also shown are the following:
- Download XML Sitemap button
- Download tool results in Excel format
- Download tool results in HTML format
Lastly, if you love the free sitemap generator tool, you can tell the world by clicking any of the following social media buttons:
- Facebook Like
- Google+
Email notification
Next, you can submit your email address to have a copy of the report emailed to you if you choose not to wait for it to finish crawling. We offer this feature as well as the sitemap generator free to all users.
Tool results data
When results are ready, the HTML sitemap generator will organize the data into six tables:
- Internal links
- External links
- Internal errors (a subset of Internal Links)
- Internal redirects (another subset of Internal Links)
- External errors (a subset of External Links)
- External redirects (another subset of External Links)
The table data is typically linked to either page URLs or to details about the data. Click on column headers to sort the results.
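The internal/external split these tables rely on comes down to comparing each link's host against the site's host. A minimal sketch of that classification in Python, assuming the crawl has already produced a list of absolute URLs (the function name and example URLs are illustrative):

```python
# Minimal sketch: split absolute URLs into internal and external by host,
# treating "www.example.com" and "example.com" as the same site.
# Note: str.removeprefix requires Python 3.9+.
from urllib.parse import urlparse

def split_links(site_url, links):
    site_host = urlparse(site_url).netloc.lower().removeprefix("www.")
    internal, external = [], []
    for link in links:
        host = urlparse(link).netloc.lower().removeprefix("www.")
        (internal if host == site_host else external).append(link)
    return internal, external

internal, external = split_links(
    "http://www.wishpicker.com/",
    ["http://www.wishpicker.com/gifts-for", "http://www.facebook.com/"],
)
print(internal)  # ['http://www.wishpicker.com/gifts-for']
print(external)  # ['http://www.facebook.com/']
```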
1. Internal Links table
The Internal links table created by the XML sitemap generator includes the following data fields:
- URLs crawled on the site
- Link to The On Page Optimization Analysis Free SEO Tool for that URL
- URL’s level from the domain root
- URL’s returned HTTP status code
- Number of internal links pointing to the URL from within the site (click to see the list of URLs)
- Link text used for the URL
- Number of internal links on the page (click to see the list of URLs)
- Number of external links on the page (click to see the list of URLs)
- Size of the page in kilobytes (click to see page load speed test results for this URL from Google)
- Link to the Check Image Sizes, Alt Text, Header Checks and More Free SEO Tool for that URL
- The title tag text from the URL’s page
- The description tag text from the URL’s page
- The keywords tag text from the URL’s page
- Contents, if used, of the anchor tag’s “rel=” attribute
2. External Links table
The External links table includes the following data fields:
- URL’s returned HTTP status code
- Number of times that URL is linked to from within the site (click to see the list of affected URLs)
- External URL used in the link
- Link text used for the URL
- Internal page URL on which the link was first found
3. Internal HTTP code errors table
The Internal errors table gathers all of the pages returning HTTP code errors (4xx and 5xx level codes) in one place to help organize the effort to resolve the problems. It includes the following data fields:
- URL’s returned HTTP status code
- Number of times that URL is linked to from within the site (click to see the list of affected URLs)
- Internal URL used in the link
- Link text used for the URL
- Internal page URL on which the link was first found
The Internal errors table is a subset of the Internal links table showing just those pages returning HTTP status code errors.
4. Internal HTTP redirects table
The Internal redirects table combines all of the pages returning HTTP redirects in one list so you can easily review them. You should not have to rely on redirects internally; instead, fix the source code so each link points directly at its final URL. This table contains the following data fields:
- URL’s returned HTTP status code (click it to go to the HTTP Response Code Checker tool)
- Number of times that URL is linked to from within the site (click to see the list of affected URLs)
- Internal URL used in the link
- Link text used for the URL
- Redirect’s target URL
- Internal page URL on which the link was first found
The Internal redirects table is a subset of the Internal links table showing just those pages returning 301 and 302 HTTP status code redirects.
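The checks behind the errors and redirects tables (internal and external alike) can be reproduced for a small list of URLs with a few lines of code. A sketch, assuming you already have the URLs to test; it flags 4xx/5xx responses and shows where 3xx redirects point:

```python
# Minimal sketch: report HTTP errors (4xx/5xx) and redirect targets (3xx)
# for a list of URLs, without following the redirects.
import requests

def check_urls(urls):
    for url in urls:
        try:
            # Some servers handle HEAD poorly; fall back to requests.get if needed.
            resp = requests.head(url, allow_redirects=False, timeout=10)
        except requests.RequestException as exc:
            print(f"{url} -> request failed: {exc}")
            continue
        if 300 <= resp.status_code < 400:
            print(f"{url} -> {resp.status_code} redirect to {resp.headers.get('Location')}")
        elif resp.status_code >= 400:
            print(f"{url} -> {resp.status_code} error")

check_urls(["http://www.wishpicker.com/", "http://www.wishpicker.com/no-such-page"])
```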
5. External HTTP code errors table
The External errors table gathers all of the pages returning HTTP code errors (4xx and 5xx level codes) in one place to help organize the effort to resolve the problems. It includes the following data fields:
- URL’s returned HTTP status code (click it to go to the HTTP Response Code Checker tool)
- Number of times that URL is linked to from within the site (click to see the list of affected URLs)
- External URL used in the link
- Link text used for the URL
- Internal page URL on which the link was first found
The External errors table is a subset of the External links table showing just those pages returning HTTP status code errors.
6. External HTTP redirects table
The External redirects table combines all of the pages returning HTTP redirects in one list so you can easily review them. Because the redirect to the targeted page does not affect your own page, fixing these URLs is a lower priority. This table contains the following data fields:
- URL’s returned HTTP status code (click it to go to the HTTP Response Code Checker tool)
- Number of times that URL is linked to from within the site (click to see the list of affected URLs)
- External URL used in the link
- Link text used for the URL
- Redirect’s target URL
- Internal page URL on which the link was first found
The External redirects table is a subset of the External links table showing just those pages returning 301 and 302 HTTP status code redirects.
-
Hi Thomas! When I say 1 click, I mean all links that can be reached directly from www.wishpicker.com. For example:
wishpicker.com/gifts-for can be reached directly from wishpicker.com
wishpicker.com/gifts-for/boyfriend cannot be reached directly from wishpicker.com. I would first need to go to wishpicker.com/gifts-for, and then go to wishpicker.com/gifts-for/boyfriend. So wishpicker.com/gifts-for is 1 click away, and wishpicker.com/gifts-for/boyfriend is 2 clicks away from wishpicker.com.
I am looking to crawl all links that are only 1 click away. Thanks for your help here. Really appreciate it.
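In crawler terms this is breadth-first depth: depth 1 is everything linked directly from the start page, depth 2 needs one intermediate page, and so on. Here is a sketch of a depth-limited crawl that reuses the crawlable_links() helper from the first code sketch in this thread; with max_depth=1 it returns exactly the "one click away" set:

```python
# Minimal sketch: breadth-first crawl recording how many clicks each URL is
# from the start page, stopping at max_depth. Reuses crawlable_links() from
# the earlier sketch in this thread.
from collections import deque

def crawl_by_depth(start_url, max_depth=1):
    depths = {start_url: 0}
    queue = deque([start_url])
    while queue:
        url = queue.popleft()
        if depths[url] >= max_depth:
            continue  # pages at the depth limit are recorded but not expanded
        for link in crawlable_links(url):
            if link not in depths:
                depths[link] = depths[url] + 1
                queue.append(link)
    return depths

for url, clicks in crawl_by_depth("http://www.wishpicker.com/", max_depth=1).items():
    print(f"{clicks} click(s): {url}")
```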
-
When you say "one click away", are you talking about a URL parameter?
I will run this through Screaming Frog and a couple of other tools and see if I can get your answer.
-
Hi Thomas
Thanks for your response. Here is my website: www.wishpicker.com
What I am looking for is all the links present only 1 click away from the page www.wishpicker.com (both internal and external).
Performing a crawl with Screaming Frog gives me all links (1, 2, 3, 4, and more clicks away). I'm not sure how to limit the crawl to show only the links that are 1 click away and exclude links that are 2 or more clicks away from this page.
Look forward to your response.
Thanks!
-
Hi,
Screaming Frog does in fact show you the links that would be considered external links. Here is a great guide:
http://www.seerinteractive.com/blog/screaming-frog-guide
If you look at the External tab in Screaming Frog, you'll find what you're looking for. You can also do this using either the campaign tool or the browser plug-in.
I would suggest reading the Seer Interactive guide and sticking with Screaming Frog; it is an outstanding tool.
Here are some other tools which I hope will help you if that is not the route you wish to go.
If you could post a screenshot of what you are looking for, or of what you mean by it only showing you the internal link count, that would help. I just want to see which screen you're looking at so I can get you the answer you're looking for.
Here are some more tools that will allow you to scan up to 1000 pages of your website for free and will tell you the information you're looking for.
http://www.internetmarketingninjas.com/tools
If you cannot find what you're looking for there, you might want to try:
http://www.quicksprout.com/2013/02/04/how-to-perform-a-seo-audit-free-5000-template-included/
distilled.net/U might be the best way to learn about these types of things; however, it is a complete search engine optimization training course.
Sincerely,
Thomas