Single URL not indexed
-
Hi everyone!
A few days ago, I noticed that one of our URLs (http://www.access.de/karriereplanung/webinare) is no longer in the Google index.
We have never had any form of penalty, link warning, etc. Our traffic from Google grows steadily every month. This single page has no external links pointing to it, only internal ones.
The page had been indexed the whole time until now. The HTTP status code is 200, and there is no noindex directive or anything similar in the code. I submitted the URL in Google Webmaster Tools so that Google would send it to the index. It was crawled successfully and submitted to the index five days ago, but nothing has happened; it is still not indexed.
Do you have any suggestions as to why this page is no longer indexed? It is well linked internally and only one click away from the home page. It still shows a PR of 5, and I always thought that pages with PageRank are indexed...
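In case anyone wants to reproduce the basic checks I mentioned, here is a minimal Python sketch (standard library only, purely for illustration) that looks at the HTTP status code, the X-Robots-Tag response header, and any meta robots tag on the page:

```python
# Minimal sketch: check HTTP status, X-Robots-Tag header and meta robots tag
# for the page in question. Illustrative only, not a full indexability audit.
import re
import urllib.request

url = "http://www.access.de/karriereplanung/webinare"

req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(req) as resp:
    status = resp.status                                     # should be 200
    x_robots = resp.headers.get("X-Robots-Tag", "not set")   # should not contain "noindex"
    html = resp.read().decode("utf-8", errors="replace")

# Look for <meta name="robots" content="..."> in the returned HTML
# (assumes the usual attribute order; good enough for a quick check).
meta_robots = re.findall(
    r'<meta[^>]+name=["\']robots["\'][^>]*content=["\']([^"\']*)["\']',
    html,
    flags=re.IGNORECASE,
)

print("HTTP status: ", status)
print("X-Robots-Tag:", x_robots)
print("Meta robots: ", meta_robots or "none found")
```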
-
Hi Nick,
First of all, thanks for your responses.
I already did the "Fetch as Googlebot" step five days ago. According to Google Webmaster Tools, the page was crawled successfully and submitted to the index, but nothing has changed in those five days.
I like your suggestion about the extra text. We will add some, do the "Fetch as Googlebot" again, and see what happens.
And you are absolutely right about the "value" of this page. It never sent much traffic, just a little. It is no big deal for us if this page doesn't get back into the index, but as someone doing SEO I want to figure out what problem Google seems to have with this page, just to test and learn for future cases.
-
Replying to myself because I just noticed something I was wrong about.
I thought that the first box at the top was an excerpt of the page it links to, but it looks like it IS actually unique.
So you probably don't need to add anything, though expanding on that text in the first box might be a good idea.
Try to get a link to that page and see if that helps.
-
The thing is, those words do appear elsewhere on the site, and Google can probably figure out that this particular page is made up of excerpts and links to the originals.
This normally isn't a huge problem, though. Lots of sites and blogs have category and tag pages that fit that description and ARE indexed (though many are not).
Before messing around with adding text which you may not really need to add, try doing a Fetch as Googlebot of the page in Google Webmaster Tools and hit the submit button when the fetch is complete. It may be that the page just got dropped by accident. If it doesn't return to the index after a few days, try adding a little totally unique content. Just a sentence or two about what these links are should be enough. I have done this on a few sites with lots of thin tag or category pages and it doesn't take a lot of text to get them into the index.
Partner link pages are also typically thin, but they may be indexed anyway if the links are useful, or ignored if it is simply a link exchange page that doesn't really have any value other than swapping links (which isn't much value). Like most things related to Google search, there isn't always a specific thing that will make the difference.
What you may want to consider is whether or not you want or need that page to appear in search, and if you think it could or should actually rank well for anything. If it doesn't matter, I wouldn't be too concerned unless there are many pages on the site that are not indexed.
-
Quite strange: I can see someone visiting this URL in the Google Analytics real-time report.
The traffic source is direct, and Google labels the page as "/empty". Any ideas why?
-
Hi Nick,
I know the page is not full of content, but if you count the words, there are almost 300. And we do not have other pages with the same content or links on our domain.
Adding more text could be a solution, but what about pages with partner links, for example? They normally have no content and lots of external links, so shouldn't they also be seen as "thin pages"?
-
It may be worth generating an XML sitemap, with this page relatively high up in the map, and submitting it to Google. That might prompt Google to crawl the page and index it.
Screaming Frog is a free tool that generates an XML sitemap for you, and a quick Google search will turn up other free generators as well.
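To make the idea concrete, here is a rough Python sketch (standard library only) of a minimal sitemap that lists the missing page with a slightly higher priority. The URLs and priority values are just placeholders, not a recommendation for the real site:

```python
# Minimal sketch: write a tiny sitemap.xml that includes the missing page.
# URLs and priority values are illustrative assumptions only.
from xml.sax.saxutils import escape

urls = [
    ("http://www.access.de/", "1.0"),
    ("http://www.access.de/karriereplanung/webinare", "0.8"),  # the page in question
]

entries = "\n".join(
    f"  <url>\n    <loc>{escape(loc)}</loc>\n    <priority>{prio}</priority>\n  </url>"
    for loc, prio in urls
)

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>\n"
)

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(sitemap)

print(sitemap)
```

The generated file can then be submitted under Sitemaps in Google Webmaster Tools.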
-
Hi Tom,
well, honestly, we do not have a sitemap...
And no, there are no other pages with similar content on our domain.
As you said it: quite odd!
-
It may have been dropped because it was seen as "thin" content. Since most of the page is excerpts from and links to other pages, it is likely being ignored, especially if there are other pages that have the same excerpts and links. If you can add some unique, descriptive text to the page, it may do better.
And about PageRank: the PR you can see in the toolbar or in other PR checkers is usually very out of date. It could be that the page had a high PR before it disappeared and really does not now. Even though visible PR can give a rough idea of how Google values a page, I wouldn't give it much thought. Plenty of low-PR pages rank very well for the search terms they target, and lots of high-PR pages don't rank very well at all.
-
That is quite odd. I checked all those things from my end, found the same, and it is still not indexed.
My only other checks at this stage would be: is the page in the XML sitemap you have submitted in Google Webmaster Tools, and does it feature similar content to any other pages on your site?
You've probably checked both already, but thought I'd ask just to be sure.