I see that Gap uses gap.com, oldnavy.gap.com and bananarepublic.gap.com. Wouldn't a better approach for SEO be to have oldnavy.com, bananarepublic.com and gap.com as entirely separate domains? Is there any benefit to the store1.parentcompany.com, store2.parentcompany.com approach? What are the pros and cons of each?
Latest posts made by kcb8178
-
Sub Domain Usage
-
RE: Is there a limit to how many URLs you can put in a robots.txt file?
Great, thanks for the input. Per Kristen's post, I am worried that it could just block the URLs altogether and they will never get purged from the index.
-
RE: Is there a limit to how many URLs you can put in a robots.txt file?
Yes, we have done that and are seeing traction on those URLs, but we can't get rid of the old URLs as fast as we would like.
Thanks for your input.
-
RE: Is there a limit to how many URLs you can put in a robots.txt file?
Thanks Kristen, that's what I was afraid would happen. Other than Fetch, is there a way to send Google these URLs en masse? There are over 100 million URLs, so Fetch is not scalable. Google is picking them up slowly, but at the current pace it will take a few months, and I would like to find a way to purge them faster.
-
Is there a limit to how many URLs you can put in a robots.txt file?
We have a site with far too many URLs caused by our crawlable faceted navigation. We are trying to purge 90% of our URLs from the indexes. We put noindex tags on the URL combinations that we no longer want indexed, but it is taking Google far too long to find the noindex tags. Meanwhile, we are getting hit with excessive-URL warnings and have been hit by Panda.
Would it help speed up the purge if we added the URLs to the robots.txt file? Could this cause any issues for us? Could it have the opposite effect and block the crawler from finding the URLs, but not purge them from the index? The list could be in excess of 100 million URLs.
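To make the two mechanisms being compared concrete, here is a minimal sketch; the facet parameter names (color, size) are hypothetical placeholders for whatever the faceted navigation actually generates. A robots.txt rule blocking crawl of those faceted URLs would look something like:

    User-agent: *
    Disallow: /*?color=
    Disallow: /*?size=

while the noindex tag already placed on those pages sits in each page's HTML head:

    <meta name="robots" content="noindex">

The caution raised in the replies above is the key trade-off: a URL disallowed in robots.txt stops being fetched, so Googlebot never sees the noindex tag on it, and the URL can linger in the index rather than being purged.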