Unreachable Pages
-
Hi All
Is there a tool that can check whether a website has stand-alone, unreachable pages?
Thanks for helping!
-
The only possible way I can think of is if the other person's site has an XML sitemap that is accurate, complete, and was generated by the website's system itself (as is often created by plugins on WordPress sites, for example).
You could then pull the URLs from the XML into the spreadsheet as indicated above, add the URLs from the "follow link" crawl, and continue from there. If a site has an XML sitemap, it's usually located at www.website.com/sitemap.xml. Alternatively, its location may be specified in the site's robots.txt file.
The only way this can be done accurately is if you can get a list of all URLs natively created by the website itself. Any third-party tool/search engine is only going to be able to find pages by following links. And the very definition of the pages you're looking for is that they've never been linked. Hence the challenge.
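To make the sitemap step concrete, here's a minimal Python sketch for pulling the URLs out of a standard XML sitemap. The sitemap location and URLs are hypothetical; a real site's sitemap may be a sitemap index file, which would need one extra level of recursion.

```python
# Sketch: extract all <loc> URLs from a standard XML sitemap.
# Assumes a plain <urlset> sitemap (not a sitemap index).
import urllib.request
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def parse_sitemap(xml_text):
    """Return the list of URLs listed in a sitemap's <loc> elements."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

def fetch_sitemap_urls(sitemap_url):
    """Download a sitemap (e.g. https://www.example.com/sitemap.xml) and parse it."""
    with urllib.request.urlopen(sitemap_url) as resp:
        return parse_sitemap(resp.read())
```

You could then paste the resulting list straight into the first column of the spreadsheet.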
Paul
-
Thanks Paul! Is there any way to do that for another person's site? Any tool?
-
The only way I can see to accomplish this is if you have a fully complete sitemap generated by your own website's system (i.e. not created by a third-party tool, which simply follows links to map your site).
Once you have the full sitemap, you'll also need to do a crawl using something like Screaming Frog to capture all the pages it can find using the "follow link" method.
Now you should have a list of ALL the pages on the site (the first sitemap) and a second list of all the pages that can be found through internal linking. Load both into a spreadsheet and eliminate all the duplicate URLs. What you'll be left with "should" be the pages that aren't connected by any links - i.e. the orphaned pages.
You'll definitely have to do some manual cleanup in this process to deal with things like page URLs that include dynamic variables etc., but it should give you a strong starting point. I'm not aware of any tool capable of doing this for you automatically.
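The spreadsheet step above is essentially a set difference. Here's a rough Python sketch, with a crude normalizer standing in for the manual cleanup (the URLs are hypothetical, and real-world tidying will be messier than this):

```python
# Sketch: orphan candidates = sitemap URLs minus crawled URLs.
from urllib.parse import urlsplit

def normalize(url):
    # Crude cleanup: drop query strings and fragments, strip trailing
    # slashes, and lowercase the host -- the kind of manual tidying
    # you'd otherwise do in the spreadsheet.
    parts = urlsplit(url.strip())
    path = parts.path.rstrip("/") or "/"
    return f"{parts.scheme}://{parts.netloc.lower()}{path}"

def orphan_candidates(sitemap_urls, crawled_urls):
    """URLs in the sitemap that the link-following crawl never reached."""
    crawled = {normalize(u) for u in crawled_urls}
    return sorted({normalize(u) for u in sitemap_urls} - crawled)
```

Anything this returns is only a candidate; you'd still review the list by hand before calling a page orphaned.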
Does this approach make sense?
Paul
-
Pages without any internal links to them.
-
Do you mean orphaned pages without any internal links to them? Or pages that are giving a bad server header code?
-
But I want to find the stand-alone pages only. I don't want to see the reachable pages. Can anyone help?
-
If the pages are indexed, you can search Google for site:www.site.com and it will list all the pages Google has indexed for that domain.