Unreachable Pages
-
Hi All
Is there a tool to check whether a website has stand-alone, unreachable pages?
Thanks for helping
-
The only possible way I can think of is if the other person's site has an XML sitemap that is accurate, complete, and was generated by the website's system itself (as is often created by plugins on WordPress sites, for example).
You could then pull the URLs from the XML into the spreadsheet as indicated above, add the URLs from the "follow link" crawl, and continue from there. If a site has an XML sitemap, it's usually located at www.website.com/sitemap.xml. Alternatively, its location may be specified in the site's robots.txt file.
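Pulling the URLs out of the sitemap can also be scripted instead of done by hand. Here's a minimal sketch in Python using only the standard library; it assumes the sitemap follows the standard sitemaps.org format (a `<urlset>` of `<url>`/`<loc>` entries), and you'd fetch the file yourself first:

```python
import xml.etree.ElementTree as ET

# Namespace defined by the sitemaps.org protocol.
SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text):
    """Extract all <loc> URLs from a sitemap's XML text."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall("sm:url/sm:loc", SITEMAP_NS)]

# Usage: download www.website.com/sitemap.xml (e.g. with urllib or your
# browser), then pass the file's contents to sitemap_urls().
```

Note that large sites often use a sitemap index file pointing to several child sitemaps; in that case you'd repeat this for each child file.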
The only way this can be done accurately is if you can get a list of all URLs natively created by the website itself. Any third-party tool/search engine is only going to be able to find pages by following links. And the very definition of the pages you're looking for is that they've never been linked. Hence the challenge.
Paul
-
Thanks Paul! Is there any way to do that for another person's site? Any tool?
-
The only way I can see to accomplish this is if you have a fully complete sitemap generated by your own website's system (i.e., not created by a third-party tool, which simply follows links to map your site).
Once you have the full sitemap, you'll also need to do a crawl using something like Screaming Frog to capture all the pages it can find using the "follow link" method.
Now you should have a list of ALL the pages on the site (the first sitemap) and a second list of all the pages that can be found through internal linking. Load both into a spreadsheet and eliminate all the duplicate URLs. What you're left with "should" be the pages that aren't connected by any links, i.e., the orphaned pages.
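The spreadsheet step above is really just a set difference, so it can also be sketched in a few lines of Python. This assumes you've exported both lists to plain-text files, one URL per line (e.g. the sitemap URLs and a Screaming Frog export); the file names are illustrative:

```python
def orphan_candidates(sitemap_file, crawl_file):
    """URLs in the full sitemap that the link-following crawl never found."""
    def load(path):
        with open(path) as f:
            # Light normalisation so a trailing slash or case difference
            # doesn't create a false orphan.
            return {line.strip().rstrip("/").lower() for line in f if line.strip()}
    return load(sitemap_file) - load(crawl_file)

# Usage: orphan_candidates("sitemap_urls.txt", "crawl_urls.txt")
```

Anything this returns is only a candidate; as noted below, dynamic URL parameters and the like still need manual review.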
You'll definitely have to do some manual cleanup in this process to deal with things like page URLs that include dynamic variables, etc., but it should give you a strong starting point. I'm not aware of any tool capable of doing this for you automatically.
Does this approach make sense?
Paul
-
pages without any internal links to them
-
Do you mean orphaned pages without any internal links to them, or pages that are returning a bad server response code?
-
But I want to find the stand-alone pages only. I don't want to see the reachable pages. Can anyone help?
-
If the page is indexed, you can place the site URL in quotes ("www.site.com") in Google and it will give you all the pages that have this URL on them.