Why doesn't this page get indexed?
-
Hi, I've just taken over development and SEO for a site and we're having difficulty getting some key pages indexed. They are two clicks away from the homepage, but still not getting indexed. They are recently created pages with unique content on them. The architecture looks like this:

Homepage >> Car page >> Engine-specific page

Whenever we add a new car, we link to its 'Car page' and it gets indexed very quickly. However, the 'Engine pages' for that car don't get indexed, even after a couple of weeks. An example of one of these pages is http://www.carbuzz.co.uk/car-reviews/Volkswagen/Beetle-New/2.0-TSI

So, things we've checked:
1. It's not blocked by robots.txt.
2. It's in the sitemap (http://www.carbuzz.co.uk/sitemap.xml).
3. It's viewable to search spiders (e.g. the link is present in the HTML source).

This page doesn't have a huge amount of unique content. We're a review aggregator, but it still has some. Any suggestions as to why it isn't indexed?

Thanks, David
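For reference, checks 1 and 2 can be scripted with just the standard library. This is a minimal sketch (`robots_allows` and `in_sitemap` are made-up helper names, and a real script would fetch the live robots.txt and sitemap rather than take them as strings):

```python
import urllib.robotparser
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def robots_allows(robots_txt, url, agent="Googlebot"):
    """True if the given robots.txt text lets `agent` fetch `url`."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

def in_sitemap(sitemap_xml, url):
    """True if `url` appears as a <loc> entry in the sitemap XML."""
    root = ET.fromstring(sitemap_xml)
    locs = {loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc") if loc.text}
    # Some generators omit the namespace, so check bare <loc> tags too.
    locs |= {loc.text.strip() for loc in root.iter("loc") if loc.text}
    return url in locs

robots = "User-agent: *\nDisallow: /account/\n"
engine_url = "http://www.carbuzz.co.uk/car-reviews/Volkswagen/Beetle-New/2.0-TSI"
print(robots_allows(robots, engine_url))  # the engine page is not blocked
```

Running both helpers against the live files rules out the two most common technical causes before moving on to content and link factors.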
-
Hi David,
Apologies, in my haste I didn't take in that it was the engine page. I wholeheartedly agree with the other comments made here. I would also add that, although not the be-all and end-all, you may find that linking out to other sites helps the page get indexed as well. If there are any additional, authoritative resources (that are not competition), it may be worth linking to a few of these. It adds value to the user as well.
-
A factor could be that another, similar page was indexed first and this page is being treated as a duplicate of it. A page from the previous version of the site is presently indexed; since that "other" page was indexed first, this page may not be getting indexed.
I noticed your site has other 2.0-TSI pages, and they were VW pages as well.
-
Interestingly, on the previous (completely different) version of the site that page used to get crawled fine, but it had more content (albeit exactly the same content as the parent page).
I wonder whether I should accept some duplication, just bite the bullet and write lots of original content, or maybe find a way to signal to Google (using microformats) that it's a page that aggregates reviews.
I still need to figure out whether it is more worthwhile focusing on the long-tail keywords that the engine page would allow (e.g. "Audi A6 1.4 Bluemotion TDI reviews") or just merging it with the parent page for a much more content-rich "Audi A6 Reviews".
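On the microformats idea: one common way to mark a page up as a review aggregator is schema.org structured data, e.g. a Product carrying an AggregateRating, emitted as JSON-LD in a `<script type="application/ld+json">` block. A minimal sketch (the rating numbers here are invented; real values would come from the aggregated reviews):

```python
import json

# Hypothetical data for one engine variant; real values would come from
# the site's database of aggregated reviews.
page = {
    "name": "Volkswagen Beetle 2.0 TSI",
    "ratingValue": 4.1,
    "reviewCount": 7,
}

# schema.org Product with an AggregateRating, serialized as JSON-LD.
json_ld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": page["name"],
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": page["ratingValue"],
        "reviewCount": page["reviewCount"],
    },
}

print(json.dumps(json_ld, indent=2))
```

Worth noting that structured data helps Google understand what the page is; it doesn't by itself guarantee indexing.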
-
Completely agree with Ryan. As an in-house SEO at a company with hundreds of thousands of pages, I can guarantee you that not every page you have will be indexed via your sitemaps. The best thing to do is use the tips Ryan stated above: link earning or content building, content building being the easier of the two.
-
Google has specifically stated they do not guarantee they will index all pages of a website. For Google to index a page, they have to decide the page offers value.
When I examine the page I note the following:
-
the page's comments are all snippets from other pages which have the full comments. There is no unique content for the comments.
-
the page's content is a total of 6 sentences, several of which are common to other pages on your site or are otherwise generic such as "Use the filter above to see reviews for the other engines."
-
the page has no backlinks to it, and the page linking to it has no backlinks either. There are no apparent off-page factors to indicate this page is important.
If you were to earn links to the page, my bet is it would be indexed. If you were to add a decent amount of quality content on the page, it would probably be indexed as well.
-
-
That's not the problem. The car page in question got crawled correctly; it's the engine page below it that didn't get crawled.
Anyway, the car-chooser page is not the only way to get to cars.
-
Hi David,
When you first load the "Car Page" http://www.carbuzz.co.uk/car-chooser, only the first few cars show. The rest are only accessible when you scroll down the page, which activates some AJAX (I think) that loads more cars into the listing.
I suspect this has something to do with it: the bots can't access the "more" listings.
You could use an HTML sitemap on the site to help them get indexed, but it would be better if the pages were linked to by default from the listings page.
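A quick way to verify what a non-JS crawler sees is to parse the raw HTML response and list its anchors, since anything injected later by AJAX won't be there. A rough sketch using only the standard library (the sample markup is invented):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects every <a href> present in the raw HTML, i.e. what a
    crawler that doesn't execute JavaScript can actually see."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def links_in_source(html):
    parser = LinkCollector()
    parser.feed(html)
    return parser.hrefs

# Invented markup: only the first car is in the initial response; the rest
# would be appended by the scroll-triggered AJAX and stay invisible here.
initial_html = '<ul><li><a href="/car-reviews/Volkswagen/Beetle-New">Beetle</a></li></ul>'
print(links_in_source(initial_html))  # ['/car-reviews/Volkswagen/Beetle-New']
```

If an engine page's URL doesn't show up in this list for the raw listings page, a non-JS crawler has no path to it from there.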