I did look into that, and my issue is that the pages are dynamic. In order to implement it we'd have to create a script to generate the correct tag sequence on the fly. Since the pages already load relatively slowly with all of the images, I'd prefer a static solution so it doesn't slow the pages down any further. Do you have any experience implementing rel="prev"/rel="next"? Did the script cause any delays? Thanks
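For reference, here's a rough sketch of the kind of tags the script would have to output in the head of each paginated page (page 2 of the series in this example, using the listing URLs from my other thread):

<!-- on page 2: point to page 1 (the main page) and page 3 -->
<link rel="prev" href="http://www.hhcrealestate.com/manhattan-beach-mls-real-estate-listings" />
<link rel="next" href="http://www.hhcrealestate.com/manhattan-beach-mls-real-estate-listings-3" />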
Latest posts made by fthead9
-
RE: Rel=canonical vs. noindex,follow for paginated pages
-
Rel=canonical vs. noindex,follow for paginated pages
I'm working on a real estate site that has multiple listing pages, e.g. http://www.hhcrealestate.com/manhattan-beach-mls-real-estate-listings
I'm trying to get the main result page to rank for that particular geo-keyword, i.e. "manhattan beach homes for sale". I want to make sure all of the individual listings on the paginated pages (2, 3, 4, etc.) still get indexed.
Is it better to add a rel="canonical" tag pointing back to the main page to all of the paginated pages, i.e. manhattan-beach-mls-real-estate-listings-2, manhattan-beach-mls-real-estate-listings-3, manhattan-beach-mls-real-estate-listings-4, etc., or is it better to add noindex,follow to those pages?
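To make the two options concrete, here's roughly what each one would look like in the head of the page-2 URL (assuming the canonical target is the main listings page):

<!-- option 1: rel=canonical back to the main page -->
<link rel="canonical" href="http://www.hhcrealestate.com/manhattan-beach-mls-real-estate-listings" />

<!-- option 2: keep the page out of the index but let the listing links pass -->
<meta name="robots" content="noindex,follow" />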
-
RE: Is there a way to find out how many 301 redirects a site gets?
I did try OSE, but as you noted there isn't any info on that domain, and YSE doesn't seem to have a filter for it that I can find. Does anyone have any other suggestions?
-
Is there a way to find out how many 301 redirects a site gets?
If you do a search on "personal loans" on Google, the first non-local/personal result is onemainfinancial.com. They have far fewer links showing in OSE and YSE than the other sites. I know onemainfinancial.com is a Citibank site, so I'm trying to determine if they are ranking so high because they are getting 301 link juice from old Citibank.com authority pages. Is there any way to check which sites are sending link juice through a 301 redirect instead of a direct link?
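One rough spot-check, assuming you have a candidate old Citibank URL to test (the path below is made up): request it and look at the status line and Location header.

# HEAD request; print only the status line and any Location header
curl -sI http://www.citibank.com/old-loans-page | grep -i -e "^HTTP" -e "^Location"
# a 301 status plus a Location header pointing at onemainfinancial.com
# would mean that page is passing its link juice through a redirect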
-
RE: What is the best method to block a sub-domain, e.g. staging.domain.com/ from getting indexed?
Matt/Ryan-
Great discussion, thanks for the input. staging.domain.com is just one of the subdomains we don't want indexed. Some of them still need to be accessible to the public, while others, like staging, could be restricted to specific IPs.
I realize after your discussion that I probably should have used a different example of a sub-domain. On the other hand, it might not have sparked the discussion, so maybe it was a good example.
-
RE: What is the best method to block a sub-domain, e.g. staging.domain.com/ from getting indexed?
Interesting, I hadn't thought of using .htaccess to block Googlebot. Thanks for the suggestion.
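For anyone who finds this later, a minimal sketch of that approach, assuming Apache with mod_rewrite enabled on the staging host:

# .htaccess on staging.domain.com: answer Googlebot with 403 Forbidden
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
RewriteRule .* - [F,L]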
-
RE: What is the best method to block a sub-domain, e.g. staging.domain.com/ from getting indexed?
Thanks Ryan. So you don't see any risk of the main site getting de-indexed if I create a second robots.txt file, e.g.
http://staging.domain.com/robots.txt
User-agent: *
Disallow: /
That was my initial thought, but when Google announced they consider sub-domains part of the TLD I was afraid it might affect the http://www.domain.com versions of the pages. So you're saying the subdomain is basically treated like a folder you block on the primary domain?
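If both hostnames are served out of the same document root, one way I've seen to get a second robots.txt, assuming Apache with mod_rewrite (the robots-staging.txt filename is just a placeholder):

# serve a blocking robots file only on the staging hostname
RewriteEngine On
RewriteCond %{HTTP_HOST} ^staging\.domain\.com$ [NC]
RewriteRule ^robots\.txt$ /robots-staging.txt [L]

robots-staging.txt would then hold the User-agent: * / Disallow: / rules above, while the main site's robots.txt stays untouched.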
-
What is the best method to block a sub-domain, e.g. staging.domain.com/ from getting indexed?
Now that Google considers subdomains as part of the TLD I'm a little leery of testing robots.txt with something like:
staging.domain.com
User-agent: *
Disallow: /
for fear it might get www.domain.com blocked as well. Has anyone had any success using robots.txt to block sub-domains? I know I could add a meta robots tag to the staging.domain.com pages, but that would require a lot more work.
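For comparison, the per-page alternative would mean putting something like this in the head of every staging page, which is the extra work I'd like to avoid:

<meta name="robots" content="noindex,nofollow" />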
-
RE: I have a site that has both http:// and https:// versions indexed, e.g. https://www.homepage.com/ and http://www.homepage.com/. How do I de-index the https:// versions without losing the link juice that is going to the https://homepage.com/ pages?
Thanks John for the suggestion. Unfortunately the https pages aren't separate pages from the http versions; one is secure and one isn't, but the actual code is identical. The rel canonical tag would appear on both the http and https versions. Are you sure Google wouldn't have any issues with the http pages having rel canonical tags that point to themselves?
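In other words, the same template would emit the same tag under both protocols, so on http the tag ends up pointing at the page itself (the page path here is just an example):

<!-- identical markup served on both http:// and https:// -->
<link rel="canonical" href="http://www.homepage.com/example-page" />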
-
I have a site that has both http:// and https:// versions indexed, e.g. https://www.homepage.com/ and http://www.homepage.com/. How do I de-index the https:// versions without losing the link juice that is going to the https://homepage.com/ pages?
I can't 301 https:// to http:// since there are some form pages that need to be https://. The site has 20,000+ pages, so individually 301ing each page would be a nightmare. Any suggestions would be greatly appreciated.
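In case it helps the discussion, here's a rough sketch of the blanket rule I'd want, assuming Apache with mod_rewrite and a hypothetical /forms/ directory holding the pages that must stay secure:

# 301 https:// to http:// everywhere except the secure form pages
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteCond %{REQUEST_URI} !^/forms/
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]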