Latest posts made by Gabriele_Layoutweb
-
If I block a URL via the robots.txt - how long will it take for Google to stop indexing that URL?
-
Duplicate URLs on eCommerce site caused by parameters
Hi there,
We have a client with a large eCommerce site that has about 1,500 duplicate URLs caused by parameters in the URLs (such as the sort parameter, where the list of products is sorted by price, age, etc.).
Example:
First duplicate URL: www.example.com/cars/toyota?sort=price-ascending
Second duplicate URL: www.example.com/cars/toyota?sort=price-descending
Third duplicate URL: www.example.com/cars/toyota?sort=age-descending
Originally we had advised adding a robots.txt rule to block search engines from crawling the URLs with parameters, but this hasn't been done.
My question: if we add the robots.txt rule now and exclude all URLs with filters, how long will it take for Google to disregard the duplicate URLs?
We could ask the developers to add canonical tags to all the duplicates, but there are about 1,500 of them...
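For illustration, the robots.txt rule we have in mind would look roughly like this (just a sketch, assuming the sort parameter is the only one we need to exclude; Google supports the * wildcard):
User-agent: *
Disallow: /*?sort=
Disallow: /*&sort=
And the alternative, a canonical tag on each sorted page pointing back to the unparameterised category page, would be something like:
<link rel="canonical" href="https://www.example.com/cars/toyota">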
Thanks in advance for any advice!
-
RE: How to avoid duplicate content
Unfortunately we can't control the content on the aggregator website, e.g. we can't get them to add a rel="canonical" tag there.
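If we could, the fix would be a cross-domain canonical on each of their product pages pointing back to our original product page, roughly like this (placeholder URL, just to illustrate):
<link rel="canonical" href="https://www.our-shop.example/products/product-name">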
-
RE: How to avoid duplicate content
Hi there,
No we can't control what is being put on the aggregator website (chrono24.com, a large website displaying watches from different dealers).
We won't be changing domain names; we'll just be copying over all the product content, restyling the site, and adding new content to the About Us/Services pages.
So I assume the only option is to have Google index our content first. Thanks for the video!
-
How to avoid duplicate content
Hi there,
Our client has an eCommerce website, and their products also appear on an aggregator website (i.e. a comparison website where multiple vendors list their products). The aggregator website shows the same photos, titles and product descriptions.
Now that we are building their new website, how can we avoid such duplicate content? Or does Google even care in this case? I have read that we could show more product information on their eCommerce website and less detail on the aggregator's website. But is there another or better solution?
Many thanks in advance for any input!
-
RE: Pagination parameters and canonical
In this Moz guide regarding Google webmaster recommendations, it says you should still set the paginated page parameter in Google's Webmaster Tools:
https://moz.com/ugc/seo-guide-to-google-webmaster-recommendations-for-pagination (search for the part "Coding Instruction for the View-All Option")
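If I remember it right, the view-all setup in that section means each paginated component page carries a canonical pointing to the view-all version, roughly like this (placeholder URL, just a sketch):
<link rel="canonical" href="https://www.example.com/products/view-all">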
Hope this helps!
-
RE: Excellent performance in BING, terrible performance in GOOGLE
Have you checked the site's backlinks in Moz's Open Site Explorer? To me it looks like there are some pretty spammy backlinks; could this be the reason?
You could try disavowing them here: https://www.google.com/webmasters/tools/disavow-links-main
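The disavow file you upload there is just a plain text list, roughly like this (placeholder domains and URL):
# spammy links found in Open Site Explorer
domain:spammy-directory.example
http://link-farm.example/page-with-link.html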
-
RE: Infinite Scroll and SEO - Is it enough to only link to the previous and next page in the pagination?
Hi Andy,
We're not sure yet about the number of scrolling pages; however, the resources you've given here have helped us a lot!
Thanks very much!
-
Infinite Scroll and SEO - Is it enough to only link to the previous and next page in the pagination?
Hi all,
We are implementing an eCommerce site where the product results will all be visible on one page (new products keep loading as you scroll down the page).
Now, I have read that the Google spiders cannot "load" new products by scrolling down the page, so the spider only sees the first few products on the results page.
Our developer wants to implement a system where a user first sees the initial products on the main results page (example.com/products).
Then, scrolling down, they will see new products with the URL changing to example.com/page/2, and so on.
Is it enough to add a pagination link from example.com/products to example.com/page/2, then another link from example.com/page/2 to example.com/page/3, and so on, so that the Google spider can make its way through all the pages? Or is that too much deep linking, and the spider wouldn't even crawl all the results pages?
Any recommendations how to go about this?
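For reference, the pagination links we have in mind would just be plain crawlable anchor links on each paginated URL, roughly like this (simplified sketch using the example URLs above):
<!-- on example.com/page/2 -->
<a href="https://example.com/products">Previous page</a>
<a href="https://example.com/page/3">Next page</a>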
Many thanks in advance!