Inner pages of a directory site won't index
-
I have a business directory site that's been around a long time, but it has always been split into two parts: a subdomain and the main domain. The subdomain has been used for listings for years, but just recently I've opened up the main domain and started adding listings there.
The problem is that none of the listing pages seem to be getting indexed in Google. The main domain is indexed, as is the category page and all the pages below it (e.g. /category/travel), but the actual business listing pages below that will not index. I can, however, get them to index if I request a crawl in Search Console.
A few other things:
I have nothing blocked in the robots.txt file (a quick check for this is sketched below)
The site has a DA over 50 and a decent number of backlinks
There is a sitemap set up as well
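For what it's worth, here is roughly how I double-checked the robots.txt and sitemap points. This is a minimal Python sketch with placeholder URLs, and it assumes a single sitemap file rather than a sitemap index:

```python
# Sanity checks for the robots.txt and sitemap points above.
# All URLs are placeholders for illustration only.
from urllib import robotparser
from urllib.request import urlopen
import xml.etree.ElementTree as ET

SITE = "https://www.example-directory.com"
LISTING_URL = SITE + "/category/travel/some-business-listing"

# 1. Is Googlebot allowed to fetch the listing URL?
rp = robotparser.RobotFileParser(SITE + "/robots.txt")
rp.read()
print("Crawlable by Googlebot:", rp.can_fetch("Googlebot", LISTING_URL))

# 2. Is the listing URL actually present in the XML sitemap?
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
tree = ET.parse(urlopen(SITE + "/sitemap.xml"))
locs = {loc.text.strip() for loc in tree.findall(".//sm:loc", ns) if loc.text}
print("In sitemap:", LISTING_URL in locs)
```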
Any ideas?
-
Great! I'll mark this as resolved then.
Craig
-
Hi Craig,
All the old content is still on the sub-domain, and none of it is on the new domain, so there shouldn't be any duplicate content issues.
I'm not sure, to be honest; probably leaving the old sub-domain as is. I already redirect the sign-up/submissions page to the new domain, so new content will go there (a quick way to verify that redirect is sketched at the end of this reply).
I should add that the sub-domain is hosted on a different server.
I had actually forgotten about this thread, but I've just done a quick check and it seems that all but one or two recent postings are indexed now, so I'm guessing it was just a matter of Google needing some time to crawl the site properly?
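Going back to the redirect I mentioned: here is a minimal sketch (placeholder URL) for confirming it is a permanent 301 rather than a temporary 302, since a 301 is the signal that passes the old URL's equity across:

```python
# Confirm the sign-up redirect is a permanent 301 rather than a 302.
# The URL is a placeholder. urlopen normally follows redirects, so a
# handler that refuses to follow them exposes the raw response.
import urllib.request
from urllib.error import HTTPError

SIGNUP_URL = "https://listings.example-directory.com/sign-up"

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # refuse to follow, so the redirect surfaces as an HTTPError

opener = urllib.request.build_opener(NoRedirect)
try:
    opener.open(SIGNUP_URL, timeout=10)
    print("No redirect found")
except HTTPError as err:
    print("Status:", err.code)  # 301 is what we want here, not 302
    print("Redirects to:", err.headers.get("Location"))
```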
-
Hi Mark,
What have you done with all the old content on the sub-domain? Is the plan to move everything to a sub-folder, or are you going to have some content in both places? Also, is the content in the sub-folder different from what is on the sub-domain, or are you just moving content across?
Craig
-
...and actually, I've just done another check and it seems that five of the six most recent listings are indexed fine, and this is without me doing anything. I have no idea why the remaining one isn't, though; nothing is different on that page, plus it's the oldest of the six, and all of them are featured and linked from the homepage, so PageRank should be flowing directly to them...
-
Hi Michael,
Sorry for the delayed response. No, the inner pages aren't set up to be nofollowed or noindexed; they all have canonicals pointing back to themselves, which I think is correct, right?
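For reference, this is roughly how I verified the canonicals (a minimal, standard-library sketch with a placeholder URL):

```python
# Pull the rel="canonical" link out of a listing page and compare it to
# the page's own URL. The URL below is a placeholder.
from html.parser import HTMLParser
from urllib.request import urlopen

LISTING_URL = "https://www.example-directory.com/category/travel/some-business-listing"

class CanonicalFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

finder = CanonicalFinder()
finder.feed(urlopen(LISTING_URL).read().decode("utf-8", errors="replace"))
print("Canonical:", finder.canonical)
print("Self-referencing:", finder.canonical == LISTING_URL)
```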
-
Do you have the links to your listings set to nofollow, or the actual listing pages set to noindex? Are there canonicals set on them pointing back to a main page? There are a number of technical reasons that could cause this problem, but it is hard to say without seeing the site or code.
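If you want to rule the first two out quickly, something along these lines works as a rough check (placeholder URL; the meta test is a crude substring match, so treat it only as a starting point):

```python
# Rough check: is a listing page noindexed, either via the meta robots
# tag or the X-Robots-Tag response header? The URL is a placeholder.
from urllib.request import urlopen

LISTING_URL = "https://www.example-directory.com/category/travel/some-business-listing"

resp = urlopen(LISTING_URL)
html = resp.read().decode("utf-8", errors="replace").lower()

print("HTTP status:", resp.status)
print("X-Robots-Tag header:", resp.headers.get("X-Robots-Tag"))
# Crude substring test; a proper check would parse the meta tags.
print("noindex in page source:", "noindex" in html)
```

The same kind of fetch against the pages that link to the listings will show whether those anchors carry rel="nofollow".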