Unreachable Pages
-
Hi all,
Is there a tool that can check whether a website has standalone, unreachable pages?
Thanks for helping
-
The only possible way I can think of is if the other person's site has an XML sitemap that is accurate, complete, and was generated by the website's system itself (as is often created by plugins on WordPress sites, for example).
You could then pull the URLs from the XML into the spreadsheet as indicated above, add the URLs from the "follow link" crawl, and continue from there. If a site has an XML sitemap, it's usually located at www.website.com/sitemap.xml. Alternatively, its location may be specified in the site's robots.txt file (a line like Sitemap: https://www.website.com/sitemap.xml).
The only way this can be done accurately is if you can get a list of all the URLs natively created by the website itself. Any third-party tool or search engine can only find pages by following links, and the very definition of the pages you're looking for is that they've never been linked. Hence the challenge.
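If you do find one, pulling the URLs out of the sitemap is easy to script. Here's a minimal Python sketch, assuming a standard sitemaps.org-style urlset (the sitemap URL is a placeholder, and a sitemap index file would need one extra level of parsing):

```python
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.website.com/sitemap.xml"  # placeholder - use the real sitemap location
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}  # standard sitemap namespace

# Fetch and parse the sitemap XML
with urllib.request.urlopen(SITEMAP_URL) as resp:
    tree = ET.parse(resp)

# Collect every <loc> entry - one per page the site itself reports
urls = [loc.text.strip() for loc in tree.findall(".//sm:loc", NS)]

print(f"{len(urls)} URLs found in the sitemap")
for url in urls:
    print(url)
```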
Paul
-
Thanks Paul! Is there any way to do that for another person's site? Any tool?
-
The only way I can see to accomplish this is if you have a fully complete sitemap generated by your own website's system (i.e. not created by a third-party tool, which would simply follow links to map your site).
Once you have the full sitemap, you'll also need to do a crawl using something like Screaming Frog to capture all the pages it can find using the "follow link" method.
Now you should have a list of ALL the pages on the site (the first sitemap) and a second list of all the pages that can be found through internal linking. Load both into a spreadsheet and eliminate all the duplicate URLs. What you'll be left with "should" be the pages that aren't connected by any links, i.e. the orphaned pages.
You'll definitely have to do some manual cleanup in this process to deal with things like page URLs that include dynamic variables, etc., but it should give you a strong starting point. I'm not aware of any tool capable of doing this for you automatically.
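If the spreadsheet step gets tedious, the de-duplication itself is just a set difference and can be scripted. A minimal sketch, assuming you've exported each list to a plain text file with one URL per line (the file names here are placeholders):

```python
def load_urls(path):
    """Read one URL per line into a set, with light normalisation
    so trivial differences (trailing slash, case) don't hide matches."""
    with open(path) as f:
        return {line.strip().rstrip("/").lower() for line in f if line.strip()}

sitemap_urls = load_urls("sitemap_urls.txt")  # every page the site itself reports
crawled_urls = load_urls("crawl_urls.txt")    # every page reachable by following links

# Pages listed in the sitemap but never reached by the crawl = candidate orphans
orphans = sitemap_urls - crawled_urls
for url in sorted(orphans):
    print(url)
```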
Does this approach make sense?
Paul
-
Pages without any internal links to them.
-
Do you mean orphaned pages without any internal links to them? Or pages that are returning a bad server header code?
-
But I want to find the standalone pages only. I don't want to see the reachable pages. Can anyone help?
-
If the page is indexed, you can place the site URL in quotes ("www.site.com") in Google, and it will give you all the pages that have this URL on them.