How do the Quoras of this world index their content?
-
I am helping a client index a very large site: more than one million pages. Think of them as questions on Quora. In the Quora case, users are usually looking for the answer to one specific question, nothing else.
On Quora there is a structure set up on the homepage to let the spiders in. But I think most of it is done with a lot of sitemaps plus internal linking based on relevancy, and nothing else... Correct? Or am I missing something?
I am going to index about a million questions and answers, just like Quora. Now I am having a hard time structuring these questions without doing it purely for the search engines, because nobody cares about a taxonomy of these questions. Users are interested in related and/or popular questions, so I want to structure them that way too.
This way every question page will be in the sitemap, but not every question will have links from other question pages pointing to it. These questions are super long-tail, and the idea is that when somebody searches for this exact question we can supply them with the answer (the page will be perfectly optimised for people searching for this question). Competition is super low because it is all unique user-generated content.
I think the best approach is to put them all in sitemaps and use an internal linking algorithm to make the popular and related questions rank better. I could even make sure every question has at least one other page linking to it. Thoughts?
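To make the "at least one link to every question" idea concrete, here is a minimal sketch of what such an internal-linking pass could look like. It assumes you already have a relatedness ranking per question; all names (`build_related_links`, `related`, `popular`) are illustrative, not any particular product's API:

```python
from collections import defaultdict

def build_related_links(questions, related, popular, per_page=5):
    """Assign each question a 'related questions' module, then patch any
    orphan (a question with zero inbound links) into the module of its
    closest neighbour, so every page has at least one inbound link."""
    links = {q: related.get(q, [])[:per_page] for q in questions}
    inbound = defaultdict(int)
    for targets in links.values():
        for t in targets:
            inbound[t] += 1
    # Guarantee >= 1 inbound internal link per question.
    for q in questions:
        if inbound[q] == 0:
            # Link from the most-related page, or a popular page as fallback.
            host = related[q][0] if related.get(q) else popular[0]
            if q not in links[host]:
                links[host].append(q)
                inbound[q] += 1
    return links
```

The fallback to a popular page is just one way to handle questions with no related matches; the point is that the orphan check runs over the whole corpus, not per page.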
Moz, do you think that when publishing one million quality Q&A pages, this strategy is enough to get them indexed and to rank for the question searches? Or do I need to design a structure around it so everything gets crawled and each question also receives at least one link from a "category" page?
-
Wow, that is insane, right?
https://www.quora.com/sitemap/questions?page_id=50
I wonder how long this carries on.
-
Quora doesn't seem to have an XML sitemap but an HTML one:
[https://www.quora.com/robots.txt](https://www.quora.com/robots.txt) refers to [https://www.quora.com/sitemap](https://www.quora.com/sitemap)
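For comparison, an XML sitemap would normally be announced the same way, via a `Sitemap:` line in robots.txt. A hypothetical robots.txt for a large Q&A site might look like this (the domain and paths are illustrative only):

```
User-agent: *
Disallow:

# A sitemap index file may reference up to 50,000 child sitemaps,
# each holding up to 50,000 URLs (sitemaps.org protocol limits).
Sitemap: https://www.example.com/sitemap-index.xml
```

Crawlers that fetch robots.txt discover the sitemap index from that line without it being linked anywhere on the site itself.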
-
Yes, there are many challenges, and external linking is definitely one of them.
What do you think about using sitemaps to get this long tail indexed? I think a lot of it can be indexed just by submitting the sitemaps.
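At a million URLs you are well past the single-sitemap limit, so you would need a sitemap index. A minimal sketch of splitting a URL list into protocol-sized files, assuming the 50,000-URLs-per-file limit from the sitemaps.org protocol (the `base` URL and file names are made up):

```python
from xml.sax.saxutils import escape

MAX_URLS = 50_000  # per-sitemap limit in the sitemaps.org protocol

def write_sitemaps(urls, base="https://www.example.com/sitemaps"):
    """Split `urls` into <=50k-URL sitemap files; return
    (index_xml, [(filename, sitemap_xml), ...])."""
    files = []
    for i in range(0, len(urls), MAX_URLS):
        chunk = urls[i:i + MAX_URLS]
        body = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in chunk)
        xml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
               '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
               f"{body}\n</urlset>")
        files.append((f"questions-{i // MAX_URLS + 1}.xml", xml))
    entries = "\n".join(f"  <sitemap><loc>{base}/{name}</loc></sitemap>"
                        for name, _ in files)
    index = ('<?xml version="1.0" encoding="UTF-8"?>\n'
             '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
             f"{entries}\n</sitemapindex>")
    return index, files
```

One million questions would come out to about 20 sitemap files behind one index, which is what you would submit in Search Console.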
-
There are many challenges to building a really large site. Most of them are related to building the site itself, but the one that often kills a site's success is the ability to get the pages into the index and keep them there. This requires a steady flow of spiders into the deepest pages of the site. If you don't have a continuous, repeated flow of spiders, the pages will be indexed but then forgotten before the spiders return.
An effective way to get deep spidering is to have powerful links permanently connected to many deep hub pages throughout the site. This produces a flow of spiders into the site and forces them to chew their way out, indexing pages as they go. These links must be powerful, or the spiders will index a couple of pages and die. They must be permanent, because if they are removed the flow of spiders will stop and the pages in the index will be forgotten.
The goal of the hub pages is to create spider webs through the site that allow spiders to index all of the pages on short link paths, rather than requiring the spiders to crawl through long tunnels of many consecutive links to get everything indexed.
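You can audit the "short link paths" part of this before launch. A minimal sketch, assuming you can export the internal link graph: breadth-first search from the hub pages gives each page's click depth, and anything missing from the result is unreachable from the hubs (the function and variable names are illustrative):

```python
from collections import deque

def click_depth(out_links, hubs):
    """BFS over internal links starting from the hub pages.
    Returns {page: depth}; pages absent from the result cannot be
    reached by a spider that enters through the hubs."""
    depth = {h: 0 for h in hubs}
    queue = deque(hubs)
    while queue:
        page = queue.popleft()
        for nxt in out_links.get(page, []):
            if nxt not in depth:
                depth[nxt] = depth[page] + 1
                queue.append(nxt)
    return depth
```

Flagging pages with a depth above some threshold (say, 4 or 5 clicks from a hub) tells you where you need more hub pages or bigger related-questions modules.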
Lots of people can build a big site, but only some of those people have the resources to get the powerful, permanent links that are required to get the pages indexed and keep them in the index. You can't rely on internal links alone for the powerful, permanent links because most spiders that enter any site come from external sources rather than spontaneously springing up deep in the bowels of your website.