Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
How to get Google to index another page
-
Hi,
I will try to make my question clear, although it is a bit complex.
For my site the most important keyword is "insurance", or at least the Danish variation of it.
My problem is that Google isn't ranking my front page for this keyword, but is ranking a subpage instead - www.mydomain.dk/insurance rather than www.mydomain.dk.
My link building will target subpages and my main domain, but I won't be able to get that many links to www.mydomain.dk/insurance.
So I'm interested in making my front page the main page for the keyword "insurance", but without throwing away the traffic I'm currently getting from the subpage.
Are there any solutions for this?
Thanks in advance.
-
Hi Kate,
Thanks for your reply - I'm grateful for yours and everyone else's contributions; they really make me reflect on which direction I should choose.
The whole site is about insurance, but there are many different things you can do: buy insurance, calculate prices, make a claim and so on, so the front page has many purposes.
The reason I want the front page to rank is that it converts better, and I'm sure I can get it ranked higher in the long run because of the link building opportunities I have for that page.
So right now it would perhaps be preferable to focus on /insurance, but in the long run I think the front page will be stronger.
It's also important for me to underline that I'm part of a big company, and it's not entirely up to me to decide on changes etc. to the homepage.
-
Keith and Bryan are dead on. If your homepage is better for users looking for insurance, then why is there an insurance page? My guess is because your site offers more than insurance.
Yes, getting links to the homepage will be easier, but that should not be the reason for killing a well-ranking page that seems to be the better page. I'd recommend updating the insurance page to make it convert like the homepage and letting it remain the focus of insurance traffic. Then make sure that your homepage links to the insurance page with good anchor text, and that it doesn't link to any other page with insurance-related anchor text.
-
Hi,
Thank you, and everyone else, for your replies - they're very helpful.
The reasons I want the traffic on the front page are the following:
-
It's a lot easier for me to get link juice to the front page. At the moment we haven't been doing focused link building, but when we start, it will be a lot easier to get links to the front page.
-
I'm quite sure the front page is a better landing page for this traffic than the /insurance page, so the conversion rates etc. will be better on the front page.
-
-
Again, my advice would be the same as Bryan's - leave it alone and build more links to the insurance page.
I don't see a problem with that page outranking your home page.
-
Rel=canonical will kill the insurance page - likely not the best idea!
-
I don't advise 301 redirecting /insurance, as it looks to me like it is very strong, and it will lose most of its link juice when redirected. Rather, optimise your homepage for the desired keyword. Google chooses the most relevant page, which is why /insurance is showing up for "insurance" - as it should! Don't worry about your home page; instead, link to the /insurance page and your entire domain will gain strength over time. In fact, the deep-linked /insurance page will likely get stronger more quickly than the home page would.
Once you link everything to the home page it will eventually get stronger, so long as it contains "insurance" in the title and is optimised for this word.
-
I would either use rel=canonical as Nakul instructed above, although this is a bit of a hack IMO,
or just build some better-quality links to the page you want to rank. I don't see a problem with building links to the /insurance page.
-
If that's the case, I would suggest adding a rel=canonical tag on www.tryg.dk/forsikring pointing to your homepage. Essentially, it's like a 301 for search engines but not for users. Users will still be able to come to your homepage and use navigation and/or other links to visit www.tryg.dk/forsikring, or they can arrive via external links pointing to www.tryg.dk/forsikring. But Google will de-index that page and rank the homepage instead.
I hope that helps. The exact canonical tag you would use on the www.tryg.dk/forsikring page would be:
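A standard canonical tag of the form described here - assuming, per the advice above, that the homepage is the intended target - would look like this, placed in the `<head>` of the www.tryg.dk/forsikring page:

```html
<!-- In the <head> of www.tryg.dk/forsikring: tells Google the
     homepage is the canonical version of this content -->
<link rel="canonical" href="http://www.tryg.dk/" />
```

Note that rel=canonical is a hint, not a directive - Google may ignore it if the two pages' content is clearly not equivalent.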
-
Hi Keith,
The page is www.tryg.dk and it is ranking; I just want it to rank for the keyword "insurance" as well, as the link building opportunities for that page are a lot better than for www.tryg.dk/forsikring.
A 301 redirect won't work, since I still want people to be able to access www.tryg.dk/forsikring.
-
Hi, the domain is www.tryg.dk and it is cached and indexed in Google.
Because of the link building possibilities for my main page, I would like to optimize it for the keyword "insurance", and I could start that process right away.
My question is whether there is any way to get Google to "understand" this, or whether I have to work my way up with my main page while I stop focusing on www.tryg.dk/forsikringer - currently the page with the highest ranking for "insurance".
-
I suspect your home page is just not ranking for the keywords - could you post the URL in question?
If not, then type
site:www.yourdomain.com
into Google and let us know if your site is listed.
Thanks,
-
Are you saying your homepage is not indexed/cached in Google? If that's the case, there might be something wrong - a noindex tag, or some sort of disallow on your homepage. Personally, I can't remember seeing a similar scenario where inner pages are indexed and the homepage is not. If it's not ranking but is cached, that's another issue.
Please confirm.
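For anyone checking their own site: the two blockers mentioned above look like this in practice (a sketch - the actual contents of the site in question are unknown):

```html
<!-- A noindex tag in the page's <head> keeps the page out of Google's index -->
<meta name="robots" content="noindex" />
```

A crawl block would instead live in robots.txt, as `Disallow: /` (or a more specific path) under `User-agent: *`. Note the difference: a robots.txt disallow stops crawling but does not by itself remove an already-indexed URL, whereas noindex removes the page from the index once Google recrawls it.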
-
Optimise your home page for your chosen keyword and 301 the page /insurance to your root domain.
To do this you need to create a .htaccess file in the / directory on your server and enter the following information.
Redirect 301 /insurance http://www.mydomain.dk
About 90% of the link juice "should" flow through if you use a 301 redirect. You can test that the above is working by visiting http://www.mydomain.dk/insurance and making sure it redirects you to your home page.
Note that a 301 can take a few weeks (possibly longer) for Google to pick up so do not expect overnight changes.
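If you'd rather check from a script than a browser, here's a minimal, self-contained illustration of how a 301 behaves once in place - a hypothetical local server standing in for the real site, with the same `/insurance` → `/` mapping as the .htaccess rule above:

```python
import http.server
import threading
import urllib.request

class RedirectHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/insurance":
            # Permanent redirect to the site root, mirroring the .htaccess rule
            self.send_response(301)
            self.send_header("Location", "/")
            self.end_headers()
        else:
            body = b"front page"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Serve on a random free local port
server = http.server.HTTPServer(("127.0.0.1", 0), RedirectHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# urllib follows the 301 automatically, just as a browser (or Googlebot) would
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/insurance")
final_url = resp.geturl()   # ends at the root, not /insurance
content = resp.read()       # the front page's body
server.shutdown()
```

Against a live site, the equivalent check is simply confirming that a request for /insurance answers with status 301 and a `Location` header pointing at the root.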
Hope that helps.